2025-04-04 18:10:59,833 [ 690930 ] INFO : ClickHouse root is not set. Will use /home/ubuntu/_work/ClickHouse/ClickHouse (runner:53, check_args_and_update_paths)
2025-04-04 18:10:59,834 [ 690930 ] INFO : Cases dir is not set. Will use /home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration (runner:97, check_args_and_update_paths)
2025-04-04 18:10:59,834 [ 690930 ] INFO : utils dir is not set. Will use /home/ubuntu/_work/ClickHouse/ClickHouse/utils (runner:108, check_args_and_update_paths)
2025-04-04 18:10:59,834 [ 690930 ] INFO : base_configs_dir: /home/ubuntu/_work/ClickHouse/ClickHouse/programs/server, binary: /home/ubuntu/_work/_temp/test/build/clickhouse, cases_dir: /home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration (runner:110, check_args_and_update_paths)
clickhouse_integration_tests_volume
Running pytest container as: 'docker run --rm --name clickhouse_integration_tests_70ne6i --privileged --dns-search='.' --memory=30709026816 --volume=/home/ubuntu/_work/_temp/test/build/clickhouse-odbc-bridge:/clickhouse-odbc-bridge --volume=/home/ubuntu/_work/_temp/test/build/clickhouse:/clickhouse --volume=/home/ubuntu/_work/_temp/test/build/clickhouse-library-bridge:/clickhouse-library-bridge --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/programs/server:/clickhouse-config --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration:/ClickHouse/tests/integration --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/backupview:/ClickHouse/utils/backupview --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/grpc-client/pb2:/ClickHouse/utils/grpc-client/pb2 --volume=/run:/run/host:ro --volume=clickhouse_integration_tests_volume:/var/lib/docker -e DOCKER_DOTNET_CLIENT_TAG=11de0b29a15d -e DOCKER_HELPER_TAG=5dc43a6382f0 -e DOCKER_BASE_TAG=6712d5cc610d -e DOCKER_KERBEROS_KDC_TAG=9391ecdee8d7 -e DOCKER_MYSQL_GOLANG_CLIENT_TAG=9bec2a638e6e -e DOCKER_MYSQL_JAVA_CLIENT_TAG=766bff31cfe4 -e DOCKER_MYSQL_JS_CLIENT_TAG=41ba7c2ec2a1 -e DOCKER_MYSQL_PHP_CLIENT_TAG=88be89c1e3b6 -e DOCKER_NGINX_DAV_TAG=b55ac9cd7519 -e DOCKER_POSTGRESQL_JAVA_CLIENT_TAG=a4eff5c7f4d6 -e DOCKER_PYTHON_BOTTLE_TAG=caad4729259e -e DOCKER_CLIENT_TIMEOUT=300 -e COMPOSE_HTTP_TIMEOUT=600 -e PYTHONUNBUFFERED=1 -e PYTEST_ADDOPTS="--dist=loadfile -n 10 -rfEps --run-id=0 --color=no --durations=0 'test_s3_zero_copy_replication/test.py::test_s3_zero_copy_with_ttl_move[tiered_copy-True-3]' test_server_keep_alive/test.py::test_max_keep_alive_requests_on_user_side test_ssh_keys_authentication/test.py::test_ecdsa test_ssh_keys_authentication/test.py::test_ed25519 test_ssh_keys_authentication/test.py::test_key_with_passphrase test_ssh_keys_authentication/test.py::test_key_with_wrong_passphrase test_ssh_keys_authentication/test.py::test_rsa test_ssh_keys_authentication/test.py::test_wrong_key test_storage_azure_blob_storage/test_check_after_upload.py::test_simple test_storage_azure_blob_storage/test_cluster.py::test_cluster_with_named_collection test_storage_azure_blob_storage/test_cluster.py::test_count test_storage_azure_blob_storage/test_cluster.py::test_format_detection test_storage_azure_blob_storage/test_cluster.py::test_partition_parallel_reading_with_cluster test_storage_azure_blob_storage/test_cluster.py::test_select_all test_storage_azure_blob_storage/test_cluster.py::test_skip_unavailable_shards test_storage_azure_blob_storage/test_cluster.py::test_union_all test_storage_azure_blob_storage/test_cluster.py::test_unset_skip_unavailable_shards test_storage_hudi/test.py::test_multiple_hudi_files
test_storage_hudi/test.py::test_single_hudi_file test_storage_hudi/test.py::test_types 'test_storage_iceberg/test.py::test_cluster_table_function[azure-1]' 'test_storage_iceberg/test.py::test_cluster_table_function[azure-2]' 'test_storage_iceberg/test.py::test_cluster_table_function[hdfs-1]' 'test_storage_iceberg/test.py::test_cluster_table_function[hdfs-2]' 'test_storage_iceberg/test.py::test_cluster_table_function[s3-1]' 'test_storage_iceberg/test.py::test_cluster_table_function[s3-2]' 'test_storage_iceberg/test.py::test_delete_files[azure-1]' 'test_storage_iceberg/test.py::test_delete_files[azure-2]' 'test_storage_iceberg/test.py::test_delete_files[hdfs-1]' 'test_storage_iceberg/test.py::test_delete_files[hdfs-2]' 'test_storage_iceberg/test.py::test_delete_files[local-1]' 'test_storage_iceberg/test.py::test_delete_files[local-2]' 'test_storage_iceberg/test.py::test_delete_files[s3-1]' 'test_storage_iceberg/test.py::test_delete_files[s3-2]' 'test_storage_iceberg/test.py::test_evolved_schema_complex[azure-1]' 'test_storage_iceberg/test.py::test_evolved_schema_complex[azure-2]' 'test_storage_iceberg/test.py::test_evolved_schema_complex[local-1]' 'test_storage_iceberg/test.py::test_evolved_schema_complex[local-2]' 'test_storage_iceberg/test.py::test_evolved_schema_complex[s3-1]' 'test_storage_iceberg/test.py::test_evolved_schema_complex[s3-2]' 'test_storage_iceberg/test.py::test_evolved_schema_simple[False-azure-1]' 'test_storage_iceberg/test.py::test_evolved_schema_simple[False-azure-2]' 'test_storage_iceberg/test.py::test_evolved_schema_simple[False-hdfs-1]' 'test_storage_iceberg/test.py::test_evolved_schema_simple[False-hdfs-2]' 'test_storage_iceberg/test.py::test_evolved_schema_simple[False-local-1]' 'test_storage_iceberg/test.py::test_evolved_schema_simple[False-local-2]' 'test_storage_iceberg/test.py::test_evolved_schema_simple[False-s3-1]' 'test_storage_iceberg/test.py::test_evolved_schema_simple[False-s3-2]' 'test_storage_iceberg/test.py::test_evolved_schema_simple[True-azure-1]' 'test_storage_iceberg/test.py::test_evolved_schema_simple[True-azure-2]' 'test_storage_iceberg/test.py::test_evolved_schema_simple[True-hdfs-1]' 'test_storage_iceberg/test.py::test_evolved_schema_simple[True-hdfs-2]' 'test_storage_iceberg/test.py::test_evolved_schema_simple[True-local-1]' 'test_storage_iceberg/test.py::test_evolved_schema_simple[True-local-2]' 'test_storage_iceberg/test.py::test_evolved_schema_simple[True-s3-1]' 'test_storage_iceberg/test.py::test_evolved_schema_simple[True-s3-2]' 'test_storage_iceberg/test.py::test_filesystem_cache[s3]' 'test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[azure-1]' 'test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[azure-2]' 'test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[hdfs-1]' 'test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[hdfs-2]' 'test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[local-1]' 'test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[local-2]' 'test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[s3-1]' 'test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[s3-2]' 'test_storage_iceberg/test.py::test_metadata_file_selection[azure-1]' 'test_storage_iceberg/test.py::test_metadata_file_selection[azure-2]' 'test_storage_iceberg/test.py::test_metadata_file_selection[hdfs-1]' 'test_storage_iceberg/test.py::test_metadata_file_selection[hdfs-2]' 'test_storage_iceberg/test.py::test_metadata_file_selection[local-1]' 
'test_storage_iceberg/test.py::test_metadata_file_selection[local-2]' 'test_storage_iceberg/test.py::test_metadata_file_selection[s3-1]' 'test_storage_iceberg/test.py::test_metadata_file_selection[s3-2]' 'test_storage_iceberg/test.py::test_multiple_iceberg_files[azure-1]' 'test_storage_iceberg/test.py::test_multiple_iceberg_files[azure-2]' 'test_storage_iceberg/test.py::test_multiple_iceberg_files[hdfs-1]' 'test_storage_iceberg/test.py::test_multiple_iceberg_files[hdfs-2]' 'test_storage_iceberg/test.py::test_multiple_iceberg_files[local-1]' 'test_storage_iceberg/test.py::test_multiple_iceberg_files[local-2]' 'test_storage_iceberg/test.py::test_multiple_iceberg_files[s3-1]' 'test_storage_iceberg/test.py::test_multiple_iceberg_files[s3-2]' 'test_storage_iceberg/test.py::test_not_evolved_schema[azure-1]' 'test_storage_iceberg/test.py::test_not_evolved_schema[azure-2]' 'test_storage_iceberg/test.py::test_not_evolved_schema[hdfs-1]' 'test_storage_iceberg/test.py::test_not_evolved_schema[hdfs-2]' 'test_storage_iceberg/test.py::test_not_evolved_schema[local-1]' 'test_storage_iceberg/test.py::test_not_evolved_schema[local-2]' 'test_storage_iceberg/test.py::test_not_evolved_schema[s3-1]' 'test_storage_iceberg/test.py::test_not_evolved_schema[s3-2]' 'test_storage_iceberg/test.py::test_partition_by[azure-1]' 'test_storage_iceberg/test.py::test_partition_by[azure-2]' 'test_storage_iceberg/test.py::test_partition_by[hdfs-1]' 'test_storage_iceberg/test.py::test_partition_by[hdfs-2]' 'test_storage_iceberg/test.py::test_partition_by[local-1]' 'test_storage_iceberg/test.py::test_partition_by[local-2]' 'test_storage_iceberg/test.py::test_partition_by[s3-1]' 'test_storage_iceberg/test.py::test_partition_by[s3-2]' test_storage_iceberg/test.py::test_restart_broken_s3 'test_storage_iceberg/test.py::test_row_based_deletes[azure]' 'test_storage_iceberg/test.py::test_row_based_deletes[hdfs]' -vvv" altinityinfra/integration-tests-runner:cd6390247eca '. 
Start tests
============================= test session starts ==============================
platform linux -- Python 3.10.12, pytest-7.4.4, pluggy-1.5.0 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /ClickHouse/tests/integration
configfile: pytest.ini
plugins: random-0.2, timeout-2.2.0, repeat-0.9.3, order-1.0.0, reportlog-0.4.0, xdist-3.5.0
timeout: 900.0s
timeout method: signal
timeout func_only: False
created: 10/10 workers
10 workers [100 items]

scheduling tests via LoadFileScheduling

test_storage_hudi/test.py::test_multiple_hudi_files
test_storage_iceberg/test.py::test_cluster_table_function[azure-1]
test_ssh_keys_authentication/test.py::test_ecdsa
test_storage_azure_blob_storage/test_cluster.py::test_cluster_with_named_collection
test_server_keep_alive/test.py::test_max_keep_alive_requests_on_user_side
test_s3_zero_copy_replication/test.py::test_s3_zero_copy_with_ttl_move[tiered_copy-True-3]
test_storage_azure_blob_storage/test_check_after_upload.py::test_simple
[gw3] [ 1%] PASSED test_ssh_keys_authentication/test.py::test_ecdsa
test_ssh_keys_authentication/test.py::test_ed25519
[gw3] [ 2%] PASSED test_ssh_keys_authentication/test.py::test_ed25519
test_ssh_keys_authentication/test.py::test_key_with_passphrase
[gw3] [ 3%] PASSED test_ssh_keys_authentication/test.py::test_key_with_passphrase
test_ssh_keys_authentication/test.py::test_key_with_wrong_passphrase
[gw5] [ 4%] PASSED test_server_keep_alive/test.py::test_max_keep_alive_requests_on_user_side
[gw3] [ 5%] PASSED test_ssh_keys_authentication/test.py::test_key_with_wrong_passphrase
test_ssh_keys_authentication/test.py::test_rsa
[gw3] [ 6%] PASSED test_ssh_keys_authentication/test.py::test_rsa
test_ssh_keys_authentication/test.py::test_wrong_key
[gw3] [ 7%] PASSED test_ssh_keys_authentication/test.py::test_wrong_key
[gw4] [ 8%] PASSED test_s3_zero_copy_replication/test.py::test_s3_zero_copy_with_ttl_move[tiered_copy-True-3]
[gw2] [ 9%] PASSED test_storage_hudi/test.py::test_multiple_hudi_files
test_storage_hudi/test.py::test_single_hudi_file
[gw2] [ 10%] PASSED test_storage_hudi/test.py::test_single_hudi_file
test_storage_hudi/test.py::test_types
[gw2] [ 11%] PASSED test_storage_hudi/test.py::test_types
[gw8] [ 12%] PASSED test_storage_azure_blob_storage/test_check_after_upload.py::test_simple
[gw1] [ 13%] PASSED test_storage_azure_blob_storage/test_cluster.py::test_cluster_with_named_collection
test_storage_azure_blob_storage/test_cluster.py::test_count
[gw1] [ 14%] PASSED test_storage_azure_blob_storage/test_cluster.py::test_count
test_storage_azure_blob_storage/test_cluster.py::test_format_detection
[gw1] [ 15%] PASSED test_storage_azure_blob_storage/test_cluster.py::test_format_detection
test_storage_azure_blob_storage/test_cluster.py::test_partition_parallel_reading_with_cluster
[gw1] [ 16%] PASSED test_storage_azure_blob_storage/test_cluster.py::test_partition_parallel_reading_with_cluster
test_storage_azure_blob_storage/test_cluster.py::test_select_all
[gw1] [ 17%] FAILED test_storage_azure_blob_storage/test_cluster.py::test_select_all
test_storage_azure_blob_storage/test_cluster.py::test_skip_unavailable_shards
[gw1] [ 18%] PASSED test_storage_azure_blob_storage/test_cluster.py::test_skip_unavailable_shards
test_storage_azure_blob_storage/test_cluster.py::test_union_all
[gw1] [ 19%] PASSED test_storage_azure_blob_storage/test_cluster.py::test_union_all
test_storage_azure_blob_storage/test_cluster.py::test_unset_skip_unavailable_shards
[gw1] [ 20%] PASSED test_storage_azure_blob_storage/test_cluster.py::test_unset_skip_unavailable_shards
[gw0] [ 21%] FAILED test_storage_iceberg/test.py::test_cluster_table_function[azure-1]
test_storage_iceberg/test.py::test_cluster_table_function[azure-2]
[gw0] [ 22%] FAILED test_storage_iceberg/test.py::test_cluster_table_function[azure-2]
test_storage_iceberg/test.py::test_cluster_table_function[hdfs-1]
[gw0] [ 23%] FAILED test_storage_iceberg/test.py::test_cluster_table_function[hdfs-1]
test_storage_iceberg/test.py::test_cluster_table_function[hdfs-2]
[gw0] [ 24%] FAILED test_storage_iceberg/test.py::test_cluster_table_function[hdfs-2]
test_storage_iceberg/test.py::test_cluster_table_function[s3-1]
[gw0] [ 25%] FAILED test_storage_iceberg/test.py::test_cluster_table_function[s3-1]
test_storage_iceberg/test.py::test_cluster_table_function[s3-2]
[gw0] [ 26%] FAILED test_storage_iceberg/test.py::test_cluster_table_function[s3-2]
test_storage_iceberg/test.py::test_delete_files[azure-1]
[gw0] [ 27%] PASSED test_storage_iceberg/test.py::test_delete_files[azure-1]
test_storage_iceberg/test.py::test_delete_files[azure-2]
[gw0] [ 28%] PASSED test_storage_iceberg/test.py::test_delete_files[azure-2]
test_storage_iceberg/test.py::test_delete_files[hdfs-1]
[gw0] [ 29%] PASSED test_storage_iceberg/test.py::test_delete_files[hdfs-1]
test_storage_iceberg/test.py::test_delete_files[hdfs-2]
[gw0] [ 30%] PASSED test_storage_iceberg/test.py::test_delete_files[hdfs-2]
test_storage_iceberg/test.py::test_delete_files[local-1]
[gw0] [ 31%] PASSED test_storage_iceberg/test.py::test_delete_files[local-1]
test_storage_iceberg/test.py::test_delete_files[local-2]
[gw0] [ 32%] PASSED test_storage_iceberg/test.py::test_delete_files[local-2]
test_storage_iceberg/test.py::test_delete_files[s3-1]
[gw0] [ 33%] PASSED test_storage_iceberg/test.py::test_delete_files[s3-1]
test_storage_iceberg/test.py::test_delete_files[s3-2]
[gw0] [ 34%] PASSED test_storage_iceberg/test.py::test_delete_files[s3-2]
test_storage_iceberg/test.py::test_evolved_schema_complex[azure-1]
[gw0] [ 35%] PASSED test_storage_iceberg/test.py::test_evolved_schema_complex[azure-1]
test_storage_iceberg/test.py::test_evolved_schema_complex[azure-2]
[gw0] [ 36%] PASSED test_storage_iceberg/test.py::test_evolved_schema_complex[azure-2]
test_storage_iceberg/test.py::test_evolved_schema_complex[local-1]
[gw0] [ 37%] PASSED test_storage_iceberg/test.py::test_evolved_schema_complex[local-1]
test_storage_iceberg/test.py::test_evolved_schema_complex[local-2]
[gw0] [ 38%] PASSED test_storage_iceberg/test.py::test_evolved_schema_complex[local-2]
test_storage_iceberg/test.py::test_evolved_schema_complex[s3-1]
[gw0] [ 39%] PASSED test_storage_iceberg/test.py::test_evolved_schema_complex[s3-1]
test_storage_iceberg/test.py::test_evolved_schema_complex[s3-2]
[gw0] [ 40%] PASSED test_storage_iceberg/test.py::test_evolved_schema_complex[s3-2]
test_storage_iceberg/test.py::test_evolved_schema_simple[False-azure-1]
[gw0] [ 41%] PASSED test_storage_iceberg/test.py::test_evolved_schema_simple[False-azure-1]
test_storage_iceberg/test.py::test_evolved_schema_simple[False-azure-2]
[gw0] [ 42%] PASSED test_storage_iceberg/test.py::test_evolved_schema_simple[False-azure-2]
test_storage_iceberg/test.py::test_evolved_schema_simple[False-hdfs-1]
[gw0] [ 43%] PASSED test_storage_iceberg/test.py::test_evolved_schema_simple[False-hdfs-1]
test_storage_iceberg/test.py::test_evolved_schema_simple[False-hdfs-2]
[gw0] [ 44%] PASSED test_storage_iceberg/test.py::test_evolved_schema_simple[False-hdfs-2]
test_storage_iceberg/test.py::test_evolved_schema_simple[False-local-1]
[gw0] [ 45%] PASSED test_storage_iceberg/test.py::test_evolved_schema_simple[False-local-1]
test_storage_iceberg/test.py::test_evolved_schema_simple[False-local-2]
[gw0] [ 46%] PASSED test_storage_iceberg/test.py::test_evolved_schema_simple[False-local-2]
test_storage_iceberg/test.py::test_evolved_schema_simple[False-s3-1]
[gw0] [ 47%] PASSED test_storage_iceberg/test.py::test_evolved_schema_simple[False-s3-1]
test_storage_iceberg/test.py::test_evolved_schema_simple[False-s3-2]
[gw0] [ 48%] PASSED test_storage_iceberg/test.py::test_evolved_schema_simple[False-s3-2]
test_storage_iceberg/test.py::test_evolved_schema_simple[True-azure-1]
[gw0] [ 49%] PASSED test_storage_iceberg/test.py::test_evolved_schema_simple[True-azure-1]
test_storage_iceberg/test.py::test_evolved_schema_simple[True-azure-2]
[gw0] [ 50%] PASSED test_storage_iceberg/test.py::test_evolved_schema_simple[True-azure-2]
test_storage_iceberg/test.py::test_evolved_schema_simple[True-hdfs-1]
[gw0] [ 51%] PASSED test_storage_iceberg/test.py::test_evolved_schema_simple[True-hdfs-1]
test_storage_iceberg/test.py::test_evolved_schema_simple[True-hdfs-2]
[gw0] [ 52%] PASSED test_storage_iceberg/test.py::test_evolved_schema_simple[True-hdfs-2]
test_storage_iceberg/test.py::test_evolved_schema_simple[True-local-1]
[gw0] [ 53%] PASSED test_storage_iceberg/test.py::test_evolved_schema_simple[True-local-1]
test_storage_iceberg/test.py::test_evolved_schema_simple[True-local-2]
[gw0] [ 54%] PASSED test_storage_iceberg/test.py::test_evolved_schema_simple[True-local-2]
test_storage_iceberg/test.py::test_evolved_schema_simple[True-s3-1]
[gw0] [ 55%] PASSED test_storage_iceberg/test.py::test_evolved_schema_simple[True-s3-1]
test_storage_iceberg/test.py::test_evolved_schema_simple[True-s3-2]
[gw0] [ 56%] PASSED test_storage_iceberg/test.py::test_evolved_schema_simple[True-s3-2]
test_storage_iceberg/test.py::test_filesystem_cache[s3]
[gw0] [ 57%] PASSED test_storage_iceberg/test.py::test_filesystem_cache[s3]
test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[azure-1]
[gw0] [ 58%] PASSED test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[azure-1]
test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[azure-2]
[gw0] [ 59%] PASSED test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[azure-2]
test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[hdfs-1]
[gw0] [ 60%] PASSED test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[hdfs-1]
test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[hdfs-2]
[gw0] [ 61%] PASSED test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[hdfs-2]
test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[local-1]
[gw0] [ 62%] PASSED test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[local-1]
test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[local-2]
[gw0] [ 63%] PASSED test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[local-2]
test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[s3-1]
[gw0] [ 64%] PASSED test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[s3-1]
test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[s3-2]
[gw0] [ 65%] PASSED test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[s3-2]
test_storage_iceberg/test.py::test_metadata_file_selection[azure-1]
[gw0] [ 66%] PASSED test_storage_iceberg/test.py::test_metadata_file_selection[azure-1]
test_storage_iceberg/test.py::test_metadata_file_selection[azure-2]
[gw0] [ 67%] PASSED test_storage_iceberg/test.py::test_metadata_file_selection[azure-2]
test_storage_iceberg/test.py::test_metadata_file_selection[hdfs-1]
[gw0] [ 68%] PASSED test_storage_iceberg/test.py::test_metadata_file_selection[hdfs-1]
test_storage_iceberg/test.py::test_metadata_file_selection[hdfs-2]
[gw0] [ 69%] PASSED test_storage_iceberg/test.py::test_metadata_file_selection[hdfs-2]
test_storage_iceberg/test.py::test_metadata_file_selection[local-1]
[gw0] [ 70%] PASSED test_storage_iceberg/test.py::test_metadata_file_selection[local-1]
test_storage_iceberg/test.py::test_metadata_file_selection[local-2]
[gw0] [ 71%] PASSED test_storage_iceberg/test.py::test_metadata_file_selection[local-2]
test_storage_iceberg/test.py::test_metadata_file_selection[s3-1]
[gw0] [ 72%] PASSED test_storage_iceberg/test.py::test_metadata_file_selection[s3-1]
test_storage_iceberg/test.py::test_metadata_file_selection[s3-2]
[gw0] [ 73%] PASSED test_storage_iceberg/test.py::test_metadata_file_selection[s3-2]
test_storage_iceberg/test.py::test_multiple_iceberg_files[azure-1]
[gw0] [ 74%] PASSED test_storage_iceberg/test.py::test_multiple_iceberg_files[azure-1]
test_storage_iceberg/test.py::test_multiple_iceberg_files[azure-2]
[gw0] [ 75%] PASSED test_storage_iceberg/test.py::test_multiple_iceberg_files[azure-2]
test_storage_iceberg/test.py::test_multiple_iceberg_files[hdfs-1]
[gw0] [ 76%] PASSED test_storage_iceberg/test.py::test_multiple_iceberg_files[hdfs-1]
test_storage_iceberg/test.py::test_multiple_iceberg_files[hdfs-2]
[gw0] [ 77%] PASSED test_storage_iceberg/test.py::test_multiple_iceberg_files[hdfs-2]
test_storage_iceberg/test.py::test_multiple_iceberg_files[local-1]
[gw0] [ 78%] PASSED test_storage_iceberg/test.py::test_multiple_iceberg_files[local-1]
test_storage_iceberg/test.py::test_multiple_iceberg_files[local-2]
[gw0] [ 79%] PASSED test_storage_iceberg/test.py::test_multiple_iceberg_files[local-2]
test_storage_iceberg/test.py::test_multiple_iceberg_files[s3-1]
[gw0] [ 80%] PASSED test_storage_iceberg/test.py::test_multiple_iceberg_files[s3-1]
test_storage_iceberg/test.py::test_multiple_iceberg_files[s3-2]
[gw0] [ 81%] PASSED test_storage_iceberg/test.py::test_multiple_iceberg_files[s3-2]
test_storage_iceberg/test.py::test_not_evolved_schema[azure-1]
[gw0] [ 82%] PASSED test_storage_iceberg/test.py::test_not_evolved_schema[azure-1]
test_storage_iceberg/test.py::test_not_evolved_schema[azure-2]
[gw0] [ 83%] PASSED test_storage_iceberg/test.py::test_not_evolved_schema[azure-2]
test_storage_iceberg/test.py::test_not_evolved_schema[hdfs-1]
[gw0] [ 84%] PASSED test_storage_iceberg/test.py::test_not_evolved_schema[hdfs-1]
test_storage_iceberg/test.py::test_not_evolved_schema[hdfs-2]
[gw0] [ 85%] PASSED test_storage_iceberg/test.py::test_not_evolved_schema[hdfs-2]
test_storage_iceberg/test.py::test_not_evolved_schema[local-1]
[gw0] [ 86%] PASSED test_storage_iceberg/test.py::test_not_evolved_schema[local-1]
test_storage_iceberg/test.py::test_not_evolved_schema[local-2]
[gw0] [ 87%] PASSED test_storage_iceberg/test.py::test_not_evolved_schema[local-2]
test_storage_iceberg/test.py::test_not_evolved_schema[s3-1]
[gw0] [ 88%] PASSED test_storage_iceberg/test.py::test_not_evolved_schema[s3-1]
test_storage_iceberg/test.py::test_not_evolved_schema[s3-2]
[gw0] [ 89%] PASSED test_storage_iceberg/test.py::test_not_evolved_schema[s3-2]
test_storage_iceberg/test.py::test_partition_by[azure-1]
[gw0] [ 90%] PASSED test_storage_iceberg/test.py::test_partition_by[azure-1]
test_storage_iceberg/test.py::test_partition_by[azure-2]
[gw0] [ 91%] PASSED test_storage_iceberg/test.py::test_partition_by[azure-2]
test_storage_iceberg/test.py::test_partition_by[hdfs-1]
[gw0] [ 92%] PASSED test_storage_iceberg/test.py::test_partition_by[hdfs-1]
test_storage_iceberg/test.py::test_partition_by[hdfs-2]
[gw0] [ 93%] PASSED test_storage_iceberg/test.py::test_partition_by[hdfs-2]
test_storage_iceberg/test.py::test_partition_by[local-1]
[gw0] [ 94%] PASSED test_storage_iceberg/test.py::test_partition_by[local-1]
test_storage_iceberg/test.py::test_partition_by[local-2]
[gw0] [ 95%] PASSED test_storage_iceberg/test.py::test_partition_by[local-2]
test_storage_iceberg/test.py::test_partition_by[s3-1]
[gw0] [ 96%] PASSED test_storage_iceberg/test.py::test_partition_by[s3-1]
test_storage_iceberg/test.py::test_partition_by[s3-2]
[gw0] [ 97%] PASSED test_storage_iceberg/test.py::test_partition_by[s3-2]
test_storage_iceberg/test.py::test_restart_broken_s3
[gw0] [ 98%] PASSED test_storage_iceberg/test.py::test_restart_broken_s3
test_storage_iceberg/test.py::test_row_based_deletes[azure]
[gw0] [ 99%] PASSED test_storage_iceberg/test.py::test_row_based_deletes[azure]
test_storage_iceberg/test.py::test_row_based_deletes[hdfs]
[gw0] [100%] PASSED test_storage_iceberg/test.py::test_row_based_deletes[hdfs]

=================================== FAILURES ===================================
_______________________________ test_select_all ________________________________
[gw1] linux -- Python 3.10.12 /usr/bin/python3

cluster =

    def test_select_all(cluster):
        node = cluster.instances["node_0"]
        port = cluster.env_variables["AZURITE_PORT"]
        storage_account_url = cluster.env_variables["AZURITE_STORAGE_ACCOUNT_URL"]
        azure_query(
            node,
            f"INSERT INTO TABLE FUNCTION azureBlobStorage('{storage_account_url}', 'cont', 'test_cluster_select_all.csv', 'devstoreaccount1',"
            f"'Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==', 'CSV', 'auto', 'key UInt64, data String') "
            f"VALUES (1, 'a'), (2, 'b')",
            settings={"azure_truncate_on_insert": 1},
        )
        print(get_azure_file_content("test_cluster_select_all.csv", port))

        query_id_pure = str(uuid.uuid4())
        pure_azure = azure_query(
            node,
            f"SELECT * from azureBlobStorage('{storage_account_url}', 'cont', 'test_cluster_select_all.csv', 'devstoreaccount1',"
            f"'Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==', 'CSV','auto')",
            query_id=query_id_pure,
        )
        print(pure_azure)
        query_id_distributed = str(uuid.uuid4())
        distributed_azure = azure_query(
            node,
            f"SELECT * from azureBlobStorageCluster('simple_cluster', '{storage_account_url}', 'cont', 'test_cluster_select_all.csv', 'devstoreaccount1',"
            f"'Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==', 'CSV',"
            f"'auto')",
            query_id=query_id_distributed,
        )
        print(distributed_azure)
        query_id_distributed_alt_syntax = str(uuid.uuid4())
        distributed_azure_alt_syntax = azure_query(
            node,
            f"SELECT * from azureBlobStorage('{storage_account_url}', 'cont', 'test_cluster_select_all.csv', 'devstoreaccount1',"
            f"'Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==', 'CSV',"
            f"'auto') "
            f"SETTINGS object_storage_cluster='simple_cluster'",
            query_id=query_id_distributed_alt_syntax,
        )
        print(distributed_azure_alt_syntax)
        azure_query(
            node,
            f"""
            DROP TABLE IF EXISTS azure_engine_table_single_node;
            CREATE TABLE azure_engine_table_single_node (key UInt64, data String)
            ENGINE=AzureBlobStorage(
                '{storage_account_url}', 'cont', 'test_cluster_select_all.csv', 'devstoreaccount1',
                'Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==',
                'CSV', 'auto'
            )
            """,
        )
        query_id_engine_single_node = str(uuid.uuid4())
        azure_engine_single_node = azure_query(
            node,
            "SELECT * FROM azure_engine_table_single_node",
            query_id=query_id_engine_single_node,
        )
        azure_query(
            node,
            f"""
            DROP TABLE IF EXISTS azure_engine_table_distributed;
            CREATE TABLE azure_engine_table_distributed (key UInt64, data String)
            ENGINE=AzureBlobStorage(
                '{storage_account_url}', 'cont', 'test_cluster_select_all.csv', 'devstoreaccount1',
                'Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==',
                'CSV', 'auto'
            )
            SETTINGS object_storage_cluster='simple_cluster'
            """,
        )
        query_id_engine_distributed = str(uuid.uuid4())
        azure_engine_distributed = azure_query(
            node,
            "SELECT * FROM azure_engine_table_distributed",
            query_id=query_id_engine_distributed,
        )

        assert TSV(pure_azure) == TSV(distributed_azure)
        assert TSV(pure_azure) == TSV(distributed_azure_alt_syntax)
        assert TSV(pure_azure) == TSV(azure_engine_single_node)
>       assert TSV(pure_azure) == TSV(azure_engine_distributed)
E       AssertionError: assert 1 a\n2 b == 1 a\n2 b\n1 a\n2 b\n1 a\n2 b
E        +  where 1 a\n2 b = TSV('1\ta\n2\tb\n')
E        +  and 1 a\n2 b\n1 a\n2 b\n1 a\n2 b = TSV('1\ta\n2\tb\n1\ta\n2\tb\n1\ta\n2\tb\n')

test_storage_azure_blob_storage/test_cluster.py:157: AssertionError
----------------------------- Captured stdout call -----------------------------
1,"a"
2,"b"
1 a
2 b
1 a
2 b
1 a
2 b
----------------------------- Captured stderr call -----------------------------
Executing query INSERT INTO TABLE FUNCTION azureBlobStorage('http://azurite1:30050/devstoreaccount1', 'cont', 'test_cluster_select_all.csv', 'devstoreaccount1','Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==', 'CSV', 'auto', 'key UInt64, data String') VALUES (1, 'a'), (2, 'b') on node_0
Request URL: 'http://127.0.0.1:30050/devstoreaccount1/cont/test_cluster_select_all.csv' Request method: 'GET' Request headers: 'x-ms-range': 'REDACTED' 'x-ms-version': 'REDACTED' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': '93b53fbc-1180-11f0-bbd6-0242ac110002' 'Authorization': 'REDACTED' No body was attached to the request
Starting new HTTP connection (1): 127.0.0.1:30050
http://127.0.0.1:30050 "GET /devstoreaccount1/cont/test_cluster_select_all.csv HTTP/1.1" 206 12
Response status: 206 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'last-modified': 'Fri, 04 Apr 2025 18:13:57 GMT' 'x-ms-creation-time': 'REDACTED' 'content-length': '12' 'content-type': 'application/octet-stream' 'content-range': 'REDACTED' 'etag': '"0x20985F8D5B38820"' 'x-ms-blob-type': 'REDACTED' 'x-ms-lease-state': 'REDACTED' 'x-ms-lease-status': 'REDACTED' 'x-ms-client-request-id': '93b53fbc-1180-11f0-bbd6-0242ac110002' 'x-ms-request-id': '8718fa61-a526-43cc-95bd-0191efbd0e1f' 'x-ms-version': 'REDACTED' 'accept-ranges': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:13:58 GMT' 'x-ms-server-encrypted': 'REDACTED' 'x-ms-blob-content-md5': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED'
Executing query SELECT * from azureBlobStorage('http://azurite1:30050/devstoreaccount1', 'cont', 'test_cluster_select_all.csv', 'devstoreaccount1','Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==', 'CSV','auto') on node_0
Executing query SELECT * from azureBlobStorageCluster('simple_cluster', 'http://azurite1:30050/devstoreaccount1', 'cont', 'test_cluster_select_all.csv', 'devstoreaccount1','Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==', 'CSV','auto') on node_0
Executing query SELECT * from azureBlobStorage('http://azurite1:30050/devstoreaccount1', 'cont', 'test_cluster_select_all.csv', 'devstoreaccount1','Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==', 'CSV','auto') SETTINGS object_storage_cluster='simple_cluster' on node_0
Executing query DROP TABLE IF EXISTS azure_engine_table_single_node; CREATE TABLE azure_engine_table_single_node (key UInt64, data String) ENGINE=AzureBlobStorage( 'http://azurite1:30050/devstoreaccount1', 'cont', 'test_cluster_select_all.csv', 'devstoreaccount1', 'Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==', 'CSV', 'auto' ) on node_0
Executing query SELECT * FROM azure_engine_table_single_node on node_0
Executing query DROP TABLE IF EXISTS azure_engine_table_distributed; CREATE TABLE azure_engine_table_distributed (key UInt64, data String) ENGINE=AzureBlobStorage( 'http://azurite1:30050/devstoreaccount1', 'cont', 'test_cluster_select_all.csv', 'devstoreaccount1', 'Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==', 'CSV', 'auto' ) SETTINGS object_storage_cluster='simple_cluster' on node_0
Executing query SELECT * FROM azure_engine_table_distributed on node_0
------------------------------ Captured log call -------------------------------
2025-04-04 18:13:57 [ 673 ] DEBUG : Executing query INSERT INTO TABLE FUNCTION azureBlobStorage('http://azurite1:30050/devstoreaccount1', 'cont', 'test_cluster_select_all.csv', 'devstoreaccount1','Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==', 'CSV', 'auto', 'key UInt64, data String') VALUES (1, 'a'), (2, 'b') on node_0 (cluster.py:3677, query)
2025-04-04 18:13:58 [ 673 ] INFO : Request URL: 'http://127.0.0.1:30050/devstoreaccount1/cont/test_cluster_select_all.csv' Request method: 'GET' Request headers: 'x-ms-range': 'REDACTED' 'x-ms-version': 'REDACTED' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': '93b53fbc-1180-11f0-bbd6-0242ac110002' 'Authorization': 'REDACTED' No body was attached to the request (_universal.py:514, on_request)
2025-04-04 18:13:58 [ 673 ] DEBUG : Starting new HTTP connection (1): 127.0.0.1:30050 (connectionpool.py:245, _new_conn)
2025-04-04 18:13:58 [ 673 ] DEBUG : http://127.0.0.1:30050 "GET /devstoreaccount1/cont/test_cluster_select_all.csv HTTP/1.1" 206 12 (connectionpool.py:547, _make_request)
2025-04-04 18:13:58 [ 673 ] INFO : Response status: 206 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'last-modified': 'Fri, 04 Apr 2025 18:13:57 GMT' 'x-ms-creation-time': 'REDACTED' 'content-length': '12' 'content-type': 'application/octet-stream' 'content-range': 'REDACTED' 'etag': '"0x20985F8D5B38820"' 'x-ms-blob-type': 'REDACTED' 'x-ms-lease-state': 'REDACTED' 'x-ms-lease-status': 'REDACTED' 'x-ms-client-request-id': '93b53fbc-1180-11f0-bbd6-0242ac110002' 'x-ms-request-id': '8718fa61-a526-43cc-95bd-0191efbd0e1f' 'x-ms-version': 'REDACTED' 'accept-ranges': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:13:58 GMT' 'x-ms-server-encrypted': 'REDACTED' 'x-ms-blob-content-md5': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' (_universal.py:550, on_response)
2025-04-04 18:13:58 [ 673 ] DEBUG : Executing query SELECT * from azureBlobStorage('http://azurite1:30050/devstoreaccount1', 'cont', 'test_cluster_select_all.csv', 'devstoreaccount1','Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==', 'CSV','auto') on node_0 (cluster.py:3677, query)
2025-04-04 18:13:58 [ 673 ] DEBUG : Executing query SELECT * from azureBlobStorageCluster('simple_cluster', 'http://azurite1:30050/devstoreaccount1', 'cont', 'test_cluster_select_all.csv', 'devstoreaccount1','Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==', 'CSV','auto') on node_0 (cluster.py:3677, query)
2025-04-04 18:13:58 [ 673 ] DEBUG : Executing query SELECT * from azureBlobStorage('http://azurite1:30050/devstoreaccount1', 'cont', 'test_cluster_select_all.csv', 'devstoreaccount1','Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==', 'CSV','auto') SETTINGS object_storage_cluster='simple_cluster' on node_0 (cluster.py:3677, query)
2025-04-04 18:13:58 [ 673 ] DEBUG : Executing query DROP TABLE IF EXISTS azure_engine_table_single_node; CREATE TABLE azure_engine_table_single_node (key UInt64, data String) ENGINE=AzureBlobStorage( 'http://azurite1:30050/devstoreaccount1', 'cont', 'test_cluster_select_all.csv', 'devstoreaccount1', 'Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==', 'CSV', 'auto' ) on node_0 (cluster.py:3677, query)
2025-04-04 18:13:58 [ 673 ] DEBUG : Executing query SELECT * FROM azure_engine_table_single_node on node_0 (cluster.py:3677, query)
2025-04-04 18:13:58 [ 673 ] DEBUG : Executing query DROP TABLE IF EXISTS azure_engine_table_distributed; CREATE TABLE azure_engine_table_distributed (key UInt64, data String) ENGINE=AzureBlobStorage( 'http://azurite1:30050/devstoreaccount1', 'cont', 'test_cluster_select_all.csv', 'devstoreaccount1', 'Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==', 'CSV', 'auto' ) SETTINGS object_storage_cluster='simple_cluster' on node_0 (cluster.py:3677, query)
2025-04-04 18:13:58 [ 673 ] DEBUG : Executing query SELECT * FROM azure_engine_table_distributed on node_0 (cluster.py:3677, query)
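The tripled stdout above (1 a / 2 b printed three times) matches one full read per node of simple_cluster rather than a coordinated distributed read. A minimal follow-up diagnostic one might add to test_select_all to confirm that, sketched under the assumption that the node fixture and query_id_engine_distributed from the test body are in scope and that system.query_log has its standard schema:

# Hypothetical diagnostic, not part of the original test: list which hosts
# executed secondary queries for the failing distributed SELECT.
node.query("SYSTEM FLUSH LOGS")  # flushes the local node; remote nodes' logs may lag
hosts = node.query(
    f"SELECT hostName(), count() "
    f"FROM clusterAllReplicas('simple_cluster', system.query_log) "
    f"WHERE initial_query_id = '{query_id_engine_distributed}' "
    f"AND type = 'QueryFinish' AND NOT is_initial_query "
    f"GROUP BY hostName()"
)
print(hosts)  # three hosts each reading the whole CSV would explain the 3x rows

If every host reports the same count, the engine table with object_storage_cluster would be fanning the whole file out to each replica instead of splitting the work.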
_____________________ test_cluster_table_function[azure-1] _____________________
[gw0] linux -- Python 3.10.12 /usr/bin/python3

started_cluster =
format_version = '1', storage_type = 'azure'

    @pytest.mark.parametrize("format_version", ["1", "2"])
    @pytest.mark.parametrize("storage_type", ["s3", "azure", "hdfs"])
    def test_cluster_table_function(started_cluster, format_version, storage_type):
        if is_arm() and storage_type == "hdfs":
            pytest.skip("Disabled test IcebergHDFS for aarch64")

        instance = started_cluster.instances["node1"]
        spark = started_cluster.spark_session

        TABLE_NAME = (
            "test_iceberg_cluster_"
            + format_version
            + "_"
            + storage_type
            + "_"
            + get_uuid_str()
        )

        def add_df(mode):
            write_iceberg_from_df(
                spark,
                generate_data(spark, 0, 100),
                TABLE_NAME,
                mode=mode,
                format_version=format_version,
            )

            files = default_upload_directory(
                started_cluster,
                storage_type,
                f"/iceberg_data/default/{TABLE_NAME}/",
                f"/iceberg_data/default/{TABLE_NAME}/",
            )

            logging.info(f"Adding another dataframe. result files: {files}")

            return files

        files = add_df(mode="overwrite")
        for i in range(1, len(started_cluster.instances)):
            files = add_df(mode="append")

        logging.info(f"Setup complete. files: {files}")
        assert len(files) == 5 + 4 * (len(started_cluster.instances) - 1)

        clusters = instance.query(f"SELECT * FROM system.clusters")
        logging.info(f"Clusters setup: {clusters}")

        # Regular Query only node1
        table_function_expr = get_creation_expression(
            storage_type, TABLE_NAME, started_cluster, table_function=True
        )
        select_regular = (
            instance.query(f"SELECT * FROM {table_function_expr}").strip().split()
        )

        # Cluster Query with node1 as coordinator
        table_function_expr_cluster = get_creation_expression(
            storage_type,
            TABLE_NAME,
            started_cluster,
            table_function=True,
            run_on_cluster=True,
        )
        query_id_cluster = str(uuid.uuid4())
        select_cluster = (
            instance.query(
                f"SELECT * FROM {table_function_expr_cluster}", query_id=query_id_cluster
            )
            .strip()
            .split()
        )

        # Cluster Query with node1 as coordinator with alternative syntax
        query_id_cluster_alt_syntax = str(uuid.uuid4())
        select_cluster_alt_syntax = (
            instance.query(
                f"""
                SELECT * FROM {table_function_expr}
                SETTINGS object_storage_cluster='cluster_simple'
                """,
                query_id=query_id_cluster_alt_syntax,
            )
            .strip()
            .split()
        )

        create_iceberg_table(storage_type, instance, TABLE_NAME, started_cluster, object_storage_cluster='cluster_simple')
        query_id_cluster_table_engine = str(uuid.uuid4())
        select_cluster_table_engine = (
            instance.query(
                f"""
                SELECT * FROM {TABLE_NAME}
                """,
                query_id=query_id_cluster_table_engine,
            )
            .strip()
            .split()
        )

        select_remote_cluster = (
            instance.query(f"SELECT * FROM remote('node2',{table_function_expr_cluster})")
            .strip()
            .split()
        )

        instance.query(f"DROP TABLE IF EXISTS `{TABLE_NAME}` SYNC")
        create_iceberg_table(storage_type, instance, TABLE_NAME, started_cluster)
        query_id_pure_table_engine = str(uuid.uuid4())
        select_pure_table_engine = (
            instance.query(
                f"""
                SELECT * FROM {TABLE_NAME}
                """,
                query_id=query_id_pure_table_engine,
            )
            .strip()
            .split()
        )
        query_id_pure_table_engine_cluster = str(uuid.uuid4())
        select_pure_table_engine_cluster = (
            instance.query(
                f"""
                SELECT * FROM {TABLE_NAME}
                SETTINGS object_storage_cluster='cluster_simple'
                """,
                query_id=query_id_pure_table_engine_cluster,
            )
            .strip()
            .split()
        )

        # Simple size check
        assert len(select_regular) == 600
        assert len(select_cluster) == 600
        assert len(select_cluster_alt_syntax) == 600
>       assert len(select_cluster_table_engine) == 600
E       AssertionError: assert 1800 == 600
E        +  where 1800 = len(['0', '1', '1', '2', '2', '3', ...])

test_storage_iceberg/test.py:747: AssertionError
---------------------------- Captured stdout setup -----------------------------
25/04/04 18:11:08 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Copy common default production configuration from /clickhouse-config. Files: config.xml, users.xml
Copy common default production configuration from /clickhouse-config. Files: config.xml, users.xml
Copy common default production configuration from /clickhouse-config. Files: config.xml, users.xml
25/04/04 18:14:52 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
---------------------------- Captured stderr setup -----------------------------
Command:[docker ps | wc -l]
Stdout:1
No running containers
Pruning Docker networks
Command:[docker network prune --force]
Command:[sysctl net.ipv4.ip_local_port_range='55000 65535']
Stdout:net.ipv4.ip_local_port_range = 55000 65535
ENV DOCKER_KERBEROS_KDC_TAG 9391ecdee8d7
ENV CLICKHOUSE_TESTS_SERVER_BIN_PATH /clickhouse
ENV MSAN_OPTIONS abort_on_error=1 poison_in_dtor=1
ENV JAVA_TOOL_OPTIONS -Djdk.attach.allowAttachSelf=true
ENV TSAN_OPTIONS halt_on_error=1 abort_on_error=1 history_size=7 memory_limit_mb=46080 second_deadlock_stack=1
ENV HOSTNAME 5495df948c8e
ENV SHLVL 0
ENV HOME /root
ENV OLDPWD /
ENV DOCKER_HELPER_TAG 5dc43a6382f0
ENV PYTHONUNBUFFERED 1
ENV DOCKER_PYTHON_BOTTLE_TAG caad4729259e
ENV UBSAN_OPTIONS print_stacktrace=1
ENV PYTEST_ADDOPTS --dist=loadfile -n 10 -rfEps --run-id=0 --color=no --durations=0 [same test selection list as in the runner command at the top of this log] -vvv
ENV CLICKHOUSE_LIBRARY_BRIDGE_BINARY_PATH /clickhouse-library-bridge
ENV COMPOSE_HTTP_TIMEOUT 600
ENV DOCKER_MYSQL_PHP_CLIENT_TAG 88be89c1e3b6
ENV DOCKER_DOTNET_CLIENT_TAG 11de0b29a15d
ENV CLICKHOUSE_TESTS_CLIENT_BIN_PATH /clickhouse
ENV DOCKER_MYSQL_JS_CLIENT_TAG 41ba7c2ec2a1
ENV PATH /spark-3.3.2-bin-hadoop3/bin:/opt/gdb/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ENV DOCKER_KERBERIZED_HADOOP_TAG latest
ENV DOCKER_CHANNEL stable
ENV DOCKER_CLIENT_TIMEOUT 300
ENV DOCKER_POSTGRESQL_JAVA_CLIENT_TAG a4eff5c7f4d6
ENV DOCKER_NGINX_DAV_TAG b55ac9cd7519
ENV DOCKER_MYSQL_GOLANG_CLIENT_TAG 9bec2a638e6e
ENV PWD /ClickHouse/tests/integration
ENV DOCKER_MYSQL_JAVA_CLIENT_TAG 766bff31cfe4
ENV CLICKHOUSE_ODBC_BRIDGE_BINARY_PATH /clickhouse-odbc-bridge
ENV CLICKHOUSE_TESTS_BASE_CONFIG_DIR /clickhouse-config
ENV TZ Etc/UTC
ENV JAVA_PATH /usr/lib/jvm/java-11-openjdk-amd64/bin/java
ENV DOCKER_BASE_TAG 6712d5cc610d
ENV SPARK_HOME /spark-3.3.2-bin-hadoop3
ENV LC_CTYPE C.UTF-8
ENV INTEGRATION_TESTS_RUN_ID 0
ENV WORKER_FREE_PORTS 30000 30001 30002 30003 30004 30005 30006 30007 30008 30009 30010 30011 30012 30013 30014 30015 30016 30017 30018 30019 30020 30021 30022 30023 30024 30025 30026 30027 30028 30029 30030 30031 30032 30033 30034 30035 30036 30037 30038 30039 30040 30041 30042 30043 30044 30045 30046 30047 30048 30049
ENV PYTEST_XDIST_TESTRUNUID 269aa778434d4c4cac33147c10e9e07e
ENV PYTEST_XDIST_WORKER gw0
ENV PYTEST_XDIST_WORKER_COUNT 10
ENV PYTEST_CURRENT_TEST test_storage_iceberg/test.py::test_cluster_table_function[azure-1] (setup)
CLUSTER INIT base_config_dir:/clickhouse-config
Picked up JAVA_TOOL_OPTIONS: -Djdk.attach.allowAttachSelf=true
Picked up JAVA_TOOL_OPTIONS: -Djdk.attach.allowAttachSelf=true
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
GatewayClient.address is deprecated and will be removed in version 1.0. Use GatewayParameters instead.
Command to send: A 72f9fe8748d825832ed45c4c0694a4e33b9ab7881f74fda6fd8ba7daafb730cb Answer received: !yv
Command to send: j i rj org.apache.spark.SparkConf e Answer received: !yv
Command to send: j i rj org.apache.spark.api.java.* e Answer received: !yv
Command to send: j i rj org.apache.spark.api.python.* e Answer received: !yv
Command to send: j i rj org.apache.spark.ml.python.* e Answer received: !yv
Command to send: j i rj org.apache.spark.mllib.api.python.* e Answer received: !yv
Command to send: j i rj org.apache.spark.resource.* e Answer received: !yv
Command to send: j i rj org.apache.spark.sql.* e Answer received: !yv
Command to send: j i rj org.apache.spark.sql.api.python.* e Answer received: !yv
Command to send: j i rj org.apache.spark.sql.hive.* e Answer received: !yv
Command to send: j i rj scala.Tuple2 e Answer received: !yv
Command to send: r u SparkConf rj e Answer received: !ycorg.apache.spark.SparkConf
Command to send: i org.apache.spark.SparkConf bTrue e Answer received: !yro0
Command to send: c o0 set sspark.app.name sspark_test e Answer received: !yro1
Command to send: c o0 set sspark.master slocal e Answer received: !yro2
Command to send: c o0 contains sspark.serializer.objectStreamReset e Answer received: !ybfalse
Command to send: c o0 set sspark.serializer.objectStreamReset s100 e Answer received: !yro3
Command to send: c o0 contains sspark.rdd.compress e Answer received: !ybfalse
Command to send: c o0 set sspark.rdd.compress sTrue e Answer received: !yro4
Command to send: c o0 contains sspark.master e Answer received: !ybtrue
Command to send: c o0 contains sspark.app.name e Answer received: !ybtrue
Command to send: c o0 contains sspark.master e Answer received: !ybtrue
Command to send: c o0 get sspark.master e Answer received: !yslocal
Command to send: c o0 contains sspark.app.name e Answer received: !ybtrue
Command to send: c o0 get sspark.app.name e Answer received: !ysspark_test
Command to send: c o0 contains sspark.home e Answer received: !ybfalse
Command to send: c o0 getAll e Answer received: !yto5
Command to send: a e o5 e Answer received: !yi8
Command to send: a g o5 i0 e Answer received: !yro6
Command to send: c o6 _1 e Answer received: !ysspark.master
Command to send: c o6 _2 e Answer received: !yslocal
Command to send: a e o5 e Answer received: !yi8
Command to send: a g o5 i1 e Answer received: !yro7
Command to send: c o7 _1 e Answer received: !ysspark.app.name
Command to send: c o7 _2 e Answer received: !ysspark_test
Command to send: a e o5 e Answer received: !yi8
Command to send: a g o5 i2 e Answer received: !yro8
Command to send: c o8 _1 e Answer received: !ysspark.rdd.compress
Command to send: c o8 _2 e Answer received: !ysTrue
Command to send: a e o5 e Answer received: !yi8
Command to send: a g o5 i3 e Answer received: !yro9
Command to send: c o9 _1 e Answer received: !ysspark.serializer.objectStreamReset
Command to send: c o9 _2 e Answer received: !ys100
Command to send: a e o5 e Answer received: !yi8
Command to send: a g o5 i4 e Answer received: !yro10
Command to send: c o10 _1 e Answer received: !ysspark.submit.pyFiles
Command to send: c o10 _2 e Answer received: !ys
Command to send: a e o5 e Answer received: !yi8
Command to send: a g o5 i5 e Answer received: !yro11
Command to send: c o11 _1 e Answer received: !ysspark.app.submitTime
Command to send: c o11 _2 e Answer received: !ys1743790268520
Command to send: a e o5 e Answer received: !yi8
Command to send: a g o5 i6 e Answer received: !yro12
Command to send: c o12 _1 e Answer received: !ysspark.submit.deployMode
Command to send: c o12 _2 e Answer received: !ysclient
Command to send: a e o5 e Answer received: !yi8
Command to send: a g o5 i7 e Answer received: !yro13
Command to send: c o13 _1 e Answer received: !ysspark.ui.showConsoleProgress
Command to send: c o13 _2 e Answer received: !ystrue
Command to send: a e o5 e Answer received: !yi8
Command to send: r u JavaSparkContext rj e Answer received: !ycorg.apache.spark.api.java.JavaSparkContext
Command to send: i org.apache.spark.api.java.JavaSparkContext ro0 e
Command to send: A 72f9fe8748d825832ed45c4c0694a4e33b9ab7881f74fda6fd8ba7daafb730cb Answer received: !yv
Command to send: m d o1 e Answer received: !yv
Command to send: m d o2 e Answer received: !yv
Command to send: m d o3 e Answer received: !yv
Command to send: m d o4 e Answer received: !yv
Command to send: m d o5 e Answer received: !yv
Answer received: !yro14
Command to send: c o14 sc e Answer received: !yro15
Command to send: c o15 conf e Answer received: !yro16
Command to send: r u PythonAccumulatorV2 rj e Answer received: !ycorg.apache.spark.api.python.PythonAccumulatorV2
Command to send: i org.apache.spark.api.python.PythonAccumulatorV2 s127.0.0.1 i57765 s72f9fe8748d825832ed45c4c0694a4e33b9ab7881f74fda6fd8ba7daafb730cb e Answer received: !yro17
Command to send: c o14 sc e Answer received: !yro18
Command to send: c o18 register ro17 e Answer received: !yv
Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils
Command to send: r m org.apache.spark.api.python.PythonUtils isEncryptionEnabled e Answer received: !ym
Command to send: c z:org.apache.spark.api.python.PythonUtils isEncryptionEnabled ro14 e Answer received: !ybfalse
Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils
Command to send: r m org.apache.spark.api.python.PythonUtils getPythonAuthSocketTimeout e Answer received: !ym
Command to send: c z:org.apache.spark.api.python.PythonUtils getPythonAuthSocketTimeout ro14 e Answer received: !yL15
Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils
Command to send: r m org.apache.spark.api.python.PythonUtils getSparkBufferSize e Answer received: !ym
Command to send: c z:org.apache.spark.api.python.PythonUtils getSparkBufferSize ro14 e Answer received: !yi65536
Command to send: r u org rj e Answer received: !yp
Command to send: r u org.apache rj e Answer received: !yp
Command to send: r u org.apache.spark rj e Answer received: !yp
Command to send: r u org.apache.spark.SparkFiles rj e Answer received: !ycorg.apache.spark.SparkFiles
Command to send: r m org.apache.spark.SparkFiles getRootDirectory e Answer received: !ym
Command to send: c z:org.apache.spark.SparkFiles getRootDirectory e Answer received: !ys/tmp/spark-c232781e-7e38-46f8-81c0-e6e5de6d7676/userFiles-a3d2ae26-afd0-4044-98de-4e80c51814e2
Command to send: c o16 get sspark.submit.pyFiles s e Answer received: !ys
Command to send: r u org rj e Answer received: !yp
Command to send: r u org.apache rj e Answer received: !yp
Command to send: r u org.apache.spark rj e Answer received: !yp
Command to send: r u org.apache.spark.util rj e Answer received: !yp
Command to send: r u org.apache.spark.util.Utils rj e Answer received: !ycorg.apache.spark.util.Utils
Command to send: r m org.apache.spark.util.Utils getLocalDir e Answer received: !ym
Command to send: c o14 sc e Answer received: !yro19
Command to send: c o19 conf e Answer received: !yro20
Command to send: c z:org.apache.spark.util.Utils getLocalDir ro20 e Answer received: !ys/tmp/spark-c232781e-7e38-46f8-81c0-e6e5de6d7676
Command to send: r u org rj e Answer received: !yp
Command to send: r u org.apache rj e Answer received: !yp
Command to send: r u org.apache.spark rj e Answer received: !yp
Command to send: r u org.apache.spark.util rj e Answer received: !yp
Command to send: r u org.apache.spark.util.Utils rj e Answer received: !ycorg.apache.spark.util.Utils
Command to send: r m org.apache.spark.util.Utils createTempDir e Answer received: !ym
Command to send: c z:org.apache.spark.util.Utils createTempDir s/tmp/spark-c232781e-7e38-46f8-81c0-e6e5de6d7676 spyspark e Answer received: !yro21
Command to send: c o21 getAbsolutePath e Answer received: !ys/tmp/spark-c232781e-7e38-46f8-81c0-e6e5de6d7676/pyspark-1321940d-cd49-407c-8d06-c6d73562e312
Command to send: c o16 get sspark.python.profile sfalse e Answer received: !ysfalse
Command to send: r u SparkSession rj e Answer received: !ycorg.apache.spark.sql.SparkSession
Command to send: r m org.apache.spark.sql.SparkSession getDefaultSession e Answer received: !ym
Command to send: c z:org.apache.spark.sql.SparkSession getDefaultSession e Answer received: !yro22
Command to send: c o22 isDefined e Answer received: !ybfalse
Command to send: r u SparkSession rj e Answer received: !ycorg.apache.spark.sql.SparkSession
Command to send: c o14 sc e Answer received: !yro23
Command to send: i java.util.HashMap e Answer received: !yao24
Command to send: c o24 put sspark.app.name sspark_test e Answer received: !yn
Command to send: c o24 put sspark.master slocal e Answer received: !yn
Command to send: i org.apache.spark.sql.SparkSession ro23 ro24 e Answer received: !yro25
Command to send: r u SparkSession rj e Answer received: !ycorg.apache.spark.sql.SparkSession
Command to send: r m org.apache.spark.sql.SparkSession setDefaultSession e Answer received: !ym
Command to send: c z:org.apache.spark.sql.SparkSession setDefaultSession ro25 e Answer received: !yv
Command to send: r u SparkSession rj e Answer received: !ycorg.apache.spark.sql.SparkSession
Command to send: r m org.apache.spark.sql.SparkSession setActiveSession e Answer received: !ym
Command to send: c z:org.apache.spark.sql.SparkSession setActiveSession ro25 e Answer received: !yv
Command to send: c o14 stop e Answer received: !yv
Command to send: r u SparkSession rj e Answer received: !ycorg.apache.spark.sql.SparkSession
Command to send: r m org.apache.spark.sql.SparkSession clearDefaultSession e Answer received: !ym
Command to send: c z:org.apache.spark.sql.SparkSession clearDefaultSession e Answer received: !yv
Command to send: r u SparkSession rj e Answer received: !ycorg.apache.spark.sql.SparkSession
Command to send: r m org.apache.spark.sql.SparkSession clearActiveSession e Answer received: !ym
Command to send: c z:org.apache.spark.sql.SparkSession clearActiveSession e Answer received: !yv
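The Command to send / Answer received pairs above are PySpark's Py4J wire protocol logged at debug level: one-letter commands (j i imports into the JVM view, i constructs an object, c calls a method, with z: marking a static target, r u / r m resolve a name or method by reflection, a is an array operation, here length and element get, m d frees a remote reference, A authenticates), and answers of the form !y<type><value> on success. A minimal decoder for the answer strings seen here, sketched from Py4J's documented type markers rather than taken from the py4j library itself:

```python
# Sketch of decoding the Py4J answers above, using Py4J's standard type
# markers (illustrative; not the py4j library's own parser).
TYPE_MARKERS = {
    "v": "void", "n": "null", "s": "string", "b": "boolean",
    "i": "integer", "L": "long", "d": "double", "r": "object reference",
    "t": "array reference", "a": "map reference", "l": "list reference",
    "c": "class", "p": "package", "m": "member/method",
}

def decode_answer(answer: str) -> tuple[str, str]:
    # Answers look like "!y<type><value>": "!" ends the frame, "y" means
    # success ("x" would carry an error), then a one-letter type marker.
    if not answer.startswith("!"):
        raise ValueError("not a Py4J answer")
    status, payload = answer[1], answer[2:]
    if status != "y":
        return ("error", payload)
    kind = TYPE_MARKERS.get(payload[:1], "unknown") if payload else "void"
    return (kind, payload[1:])

assert decode_answer("!yro0") == ("object reference", "o0")
assert decode_answer("!ysspark_test") == ("string", "spark_test")
assert decode_answer("!yi8") == ("integer", "8")
assert decode_answer("!ybtrue") == ("boolean", "true")
```

Read this way, the trace is simply PySpark building a SparkConf (o0), iterating its getAll array (o5), constructing a JavaSparkContext (o14) and a SparkSession (o25), and finally stopping the context and clearing the default and active sessions before the cluster work begins.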
clickhouse_start_command: clickhouse server --config-file=/etc/clickhouse-server/{main_config_file} --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log
HDFS BASE CMD:{self.base_hdfs_cmd}
Cluster name: project_name:rootteststorageiceberg-gw0. Added instance name:node1 tag:6712d5cc610d base_cmd:['docker', 'compose', '--env-file', '/ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/.env', '--project-name', 'rootteststorageiceberg-gw0', '--file', '/ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node1/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_hdfs.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_azurite.yml'] docker_compose_yml_dir:/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/
clickhouse_start_command: clickhouse server --config-file=/etc/clickhouse-server/{main_config_file} --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log
Cluster name: project_name:rootteststorageiceberg-gw0. Added instance name:node2 tag:6712d5cc610d base_cmd:['docker', 'compose', '--env-file', '/ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/.env', '--project-name', 'rootteststorageiceberg-gw0', '--file', '/ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node1/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_hdfs.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_azurite.yml', '--file', '/ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node2/docker-compose.yml'] docker_compose_yml_dir:/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/
clickhouse_start_command: clickhouse server --config-file=/etc/clickhouse-server/{main_config_file} --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log
Cluster name: project_name:rootteststorageiceberg-gw0. Added instance name:node3 tag:6712d5cc610d base_cmd:['docker', 'compose', '--env-file', '/ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/.env', '--project-name', 'rootteststorageiceberg-gw0', '--file', '/ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node1/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_hdfs.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_azurite.yml', '--file', '/ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node2/docker-compose.yml', '--file', '/ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node3/docker-compose.yml'] docker_compose_yml_dir:/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/
Starting cluster...
Running tests in /ClickHouse/tests/integration/test_storage_iceberg/test.py
Cluster start called.
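Note how each successive Added instance line reuses the same compose invocation and appends only that node's compose file. A toy illustration of the accumulation (paths shortened into constants; this mirrors the logged lists, it is not the framework's code):

```python
# Toy illustration of how base_cmd grows across the three "Added instance"
# lines above: shared compose files once, then one extra --file per node.
INSTANCES_DIR = "/ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0"
COMPOSE_DIR = "/ClickHouse/tests/integration/helpers/../../../tests/integration/compose"

base_cmd = [
    "docker", "compose",
    "--env-file", f"{INSTANCES_DIR}/.env",
    "--project-name", "rootteststorageiceberg-gw0",
    "--file", f"{INSTANCES_DIR}/node1/docker-compose.yml",
    "--file", f"{COMPOSE_DIR}/docker_compose_hdfs.yml",
    "--file", f"{COMPOSE_DIR}/docker_compose_minio.yml",
    "--file", f"{COMPOSE_DIR}/docker_compose_azurite.yml",
]

for node in ("node2", "node3"):
    # each later instance contributes only its own compose file
    base_cmd += ["--file", f"{INSTANCES_DIR}/{node}/docker-compose.yml"]
```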
is_up=False
Docker networks for project rootteststorageiceberg-gw0 are NETWORK ID NAME DRIVER SCOPE
Docker containers for project rootteststorageiceberg-gw0 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Docker volumes for project rootteststorageiceberg-gw0 are DRIVER VOLUME NAME
Cleanup called
Docker networks for project rootteststorageiceberg-gw0 are NETWORK ID NAME DRIVER SCOPE
Docker containers for project rootteststorageiceberg-gw0 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Docker volumes for project rootteststorageiceberg-gw0 are DRIVER VOLUME NAME
Command:[docker container list --all --filter name='^/rootteststorageiceberg-gw0-.*-1$' --format '{{.ID}}:{{.Names}}']
Unstopped containers: {}
No running containers for project: rootteststorageiceberg-gw0
Trying to prune unused networks...
Trying to prune unused images...
Command:[docker image prune -f]
Command to send: m d o0 e Answer received: !yv
Command to send: m d o6 e Answer received: !yv
Command to send: m d o7 e Answer received: !yv
Command to send: m d o8 e Answer received: !yv
Command to send: m d o9 e Answer received: !yv
Command to send: m d o10 e Answer received: !yv
Command to send: m d o11 e Answer received: !yv
Command to send: m d o12 e Answer received: !yv
Command to send: m d o13 e Answer received: !yv
Command to send: m d o15 e Answer received: !yv
Command to send: m d o18 e Answer received: !yv
Command to send: m d o19 e Answer received: !yv
Command to send: m d o20 e Answer received: !yv
Command to send: m d o24 e Answer received: !yv
Stderr:Error response from daemon: a prune operation is already running
Exitcode:1
Trying to prune unused volumes...
Command:[docker volume ls | wc -l]
Stdout:1
Volumes pruned: 1
Setup directory for instance: node1
Create directory for configuration generated in this helper
Create directory for common tests configuration
Copy common configuration from helpers
Generate and write macros file
Copy custom test config files ['/ClickHouse/tests/integration/test_storage_iceberg/configs/config.d/query_log.xml', '/ClickHouse/tests/integration/test_storage_iceberg/configs/config.d/cluster.xml', '/ClickHouse/tests/integration/test_storage_iceberg/configs/config.d/named_collections.xml', '/ClickHouse/tests/integration/test_storage_iceberg/configs/config.d/filesystem_caches.xml'] to /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node1/configs/config.d
Setup database dir /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node1/database
Setup logs dir /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node1/logs
Entrypoint cmd: bash -c "trap 'pkill tail' INT TERM; clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon -- ; coproc tail -f /dev/null; wait $$!"
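The cleanup step above probes for containers left over from a previous run before pruning; the logged docker container list command and the Unstopped containers: {} result suggest roughly the following check, sketched here with subprocess rather than the helpers' docker client (assumed logic, inferred from the log):

```python
# Sketch of the leftover-container probe logged above; assumed logic,
# not the helpers' exact implementation.
import subprocess

def unstopped_containers(project_name: str) -> dict[str, str]:
    out = subprocess.check_output(
        [
            "docker", "container", "list", "--all",
            "--filter", f"name=^/{project_name}-.*-1$",
            "--format", "{{.ID}}:{{.Names}}",
        ],
        text=True,
    )
    # one "ID:Name" pair per line; empty output means nothing to stop
    return dict(line.split(":", 1) for line in out.splitlines() if line)

# unstopped_containers("rootteststorageiceberg-gw0") -> {} in this run
```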
Setup directory for instance: node2
Create directory for configuration generated in this helper
Create directory for common tests configuration
Copy common configuration from helpers
Generate and write macros file
Copy custom test config files ['/ClickHouse/tests/integration/test_storage_iceberg/configs/config.d/query_log.xml', '/ClickHouse/tests/integration/test_storage_iceberg/configs/config.d/cluster.xml', '/ClickHouse/tests/integration/test_storage_iceberg/configs/config.d/named_collections.xml', '/ClickHouse/tests/integration/test_storage_iceberg/configs/config.d/filesystem_caches.xml'] to /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node2/configs/config.d
Setup database dir /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node2/database
Setup logs dir /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node2/logs
Entrypoint cmd: bash -c "trap 'pkill tail' INT TERM; clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon -- ; coproc tail -f /dev/null; wait $$!"
Setup directory for instance: node3
Create directory for configuration generated in this helper
Create directory for common tests configuration
Copy common configuration from helpers
Generate and write macros file
Copy custom test config files ['/ClickHouse/tests/integration/test_storage_iceberg/configs/config.d/query_log.xml', '/ClickHouse/tests/integration/test_storage_iceberg/configs/config.d/cluster.xml', '/ClickHouse/tests/integration/test_storage_iceberg/configs/config.d/named_collections.xml', '/ClickHouse/tests/integration/test_storage_iceberg/configs/config.d/filesystem_caches.xml'] to /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node3/configs/config.d
Setup database dir /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node3/database
Setup logs dir /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node3/logs
Entrypoint cmd: bash -c "trap 'pkill tail' INT TERM; clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon -- ; coproc tail -f /dev/null; wait $$!"
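The Entrypoint cmd deserves a gloss: the server starts with --daemon, so something else must keep the container's PID 1 alive. A coproc tail -f /dev/null plays that role, the trap kills the tail on INT/TERM so wait returns promptly, and $$! is (most likely) the compose-escaped $!, the coprocess PID. The same supervision idea expressed in Python, purely as an illustration and not the harness's code:

```python
# Illustrative sketch: the same idea as the bash entrypoint above.
# Start a daemonizing server, keep the main process alive with a dummy
# child, and exit cleanly on INT/TERM.
import signal
import subprocess

def supervise(server_cmd: list[str]) -> None:
    subprocess.run(server_cmd, check=True)  # daemonizes, then returns
    # do-nothing child that keeps this process (PID 1 in a container) busy
    keeper = subprocess.Popen(["tail", "-f", "/dev/null"])

    def _stop(signum, frame):
        keeper.terminate()  # analogous to `pkill tail` in the trap

    signal.signal(signal.SIGINT, _stop)
    signal.signal(signal.SIGTERM, _stop)
    keeper.wait()  # analogous to `wait $!` on the coprocess
```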
Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw', 'HDFS_HOST': 'hdfs1', 'HDFS_NAME_PORT': '50070', 'HDFS_DATA_PORT': '50075', 'HDFS_LOGS': '/ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/hdfs/logs', 'HDFS_FS': 'bind', 'MINIO_CERTS_DIR': '/ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/minio/certs', 'MINIO_DATA_DIR': '/ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/minio/data', 'MINIO_PORT': '9001', 'SSL_CERT_FILE': '/ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/minio/certs/public.crt', 'RESOLVER_LOGS': '/ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/resolver', 'RESOLVER_LOGS_FS': 'bind', 'AZURITE_PORT': '30000', 'AZURITE_STORAGE_ACCOUNT_URL': 'http://azurite1:30000/devstoreaccount1', 'AZURITE_CONNECTION_STRING': 'DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://azurite1:30000/devstoreaccount1;'} stored in /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/.env
Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
No config file found
Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
No config file found
http://localhost:None "GET /version HTTP/1.1" 200 826
Command:[docker compose --env-file /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/.env --project-name rootteststorageiceberg-gw0 --file /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_hdfs.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_azurite.yml --file /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node2/docker-compose.yml --file /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node3/docker-compose.yml pull]
Command to send: m d o21 e Answer received: !yv
Command to send: m d o22 e Answer received: !yv
Command to send: m d o23 e Answer received: !yv
Stderr: node1 Skipped - Image is already being pulled by node2
Stderr: node3 Skipped - Image is already being pulled by node2
Stderr: proxy2 Skipped - Image is already being pulled by proxy1
Stderr: azurite1 Pulling
Stderr: hdfs1 Pulling
Stderr: minio1 Pulling
Stderr: node2 Pulling
Stderr: resolver Pulling
Stderr: proxy1 Pulling
Stderr: minio1 Pulled
Stderr: node2 Pulled
Stderr: hdfs1 Pulled
Stderr: proxy1 Pulled
Stderr: resolver Pulled
[layer-by-layer download/extract progress for the azurite1 image elided: layers f18232174bc9, cb2bde55f71f, 9d0e0719fbe0, 6f063dbd7a5d, f9e3e3d8f042, a39ef2f62dc8, 9a21c6b23f0e, efeb7b313b67, 6fef65209747 and 3d377e512a83 each reached "Download complete" and "Pull complete"]
Stderr: azurite1 Pulled
Setup HDFS
Command:[docker compose --project-name rootteststorageiceberg-gw0 --env-file /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_hdfs.yml --verbose up -d]
Stderr:time="2025-04-04T18:13:53Z" level=trace msg="Docker Desktop integration not enabled"
Stderr: Network rootteststorageiceberg-gw0_default Creating
Stderr: Network rootteststorageiceberg-gw0_default Created
Stderr: Container rootteststorageiceberg-gw0-hdfs1-1 Creating
Stderr: Container rootteststorageiceberg-gw0-hdfs1-1 Created
Stderr: Container rootteststorageiceberg-gw0-hdfs1-1 Starting
Stderr: Container rootteststorageiceberg-gw0-hdfs1-1 Started
Stderr:time="2025-04-04T18:13:53Z" level=debug msg="otel error" error=""
Stderr:time="2025-04-04T18:13:53Z" level=debug msg="otel error" error=""
get_instance_ip instance_name=hdfs1
http://localhost:None "GET /v1.46/containers/rootteststorageiceberg-gw0-hdfs1-1/json HTTP/1.1" 200 None
write_data protocol:http host:hdfs1 port:50070 path: /somefilewithrandomname222 user:root, principal:None
CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None}
Starting new HTTP connection (1): 172.16.2.2:50070
Can't connect to HDFS or preparations are not done yet HTTPConnectionPool(host='172.16.2.2', port=50070): Max retries exceeded with url: /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 203, in _new_conn
    sock = connection.create_connection(
  File "/usr/local/lib/python3.10/dist-packages/urllib3/util/connection.py", line 85, in create_connection
    raise err
  File "/usr/local/lib/python3.10/dist-packages/urllib3/util/connection.py", line 73, in create_connection
    sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 791, in urlopen
    response = self._make_request(
  File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 497, in _make_request
    conn.request(
  File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 395, in request
    self.endheaders()
  File "/usr/lib/python3.10/http/client.py", line 1278, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/usr/lib/python3.10/http/client.py", line 1038, in _send_output
    self.send(msg)
  File "/usr/lib/python3.10/http/client.py", line 976, in send
    self.connect()
  File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 243, in connect
    self.sock = self._new_conn()
  File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 218, in _new_conn
    raise NewConnectionError(
urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/requests/adapters.py", line 486, in send
    resp = conn.urlopen(
  File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 845, in urlopen
    retries = retries.increment(
  File "/usr/local/lib/python3.10/dist-packages/urllib3/util/retry.py", line 515, in increment
    raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='172.16.2.2', port=50070): Max retries exceeded with url: /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/ClickHouse/tests/integration/helpers/cluster.py", line 2565, in wait_hdfs_to_start
    self.hdfs_api.write_data("/somefilewithrandomname222", "1")
  File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 211, in write_data
    response = self.req_wrapper(
  File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 132, in req_wrapper
    response_data = func(**kwargs)
  File "/usr/local/lib/python3.10/dist-packages/requests/api.py", line 130, in put
    return request("put", url, data=data, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/requests/api.py", line 59, in request
    return session.request(method=method, url=url, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/requests/adapters.py", line 519, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='172.16.2.2', port=50070): Max retries exceeded with url: /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
[the same write_data attempt against 172.16.2.2:50070 and the identical ConnectionRefused traceback repeat nine more times while the harness waits for HDFS to come up; the duplicates are elided]
Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 203, in _new_conn sock = connection.create_connection( File "/usr/local/lib/python3.10/dist-packages/urllib3/util/connection.py", line 85, in create_connection raise err File "/usr/local/lib/python3.10/dist-packages/urllib3/util/connection.py", line 73, in create_connection sock.connect(sa) ConnectionRefusedError: [Errno 111] Connection refused The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 791, in urlopen response = self._make_request( File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 497, in _make_request conn.request( File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 395, in request self.endheaders() File "/usr/lib/python3.10/http/client.py", line 1278, in endheaders self._send_output(message_body, encode_chunked=encode_chunked) File "/usr/lib/python3.10/http/client.py", line 1038, in _send_output self.send(msg) File "/usr/lib/python3.10/http/client.py", line 976, in send self.connect() File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 243, in connect self.sock = self._new_conn() File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 218, in _new_conn raise NewConnectionError( urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/requests/adapters.py", line 486, in send resp = conn.urlopen( File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 845, in urlopen retries = retries.increment( File "/usr/local/lib/python3.10/dist-packages/urllib3/util/retry.py", line 515, in increment raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='172.16.2.2', port=50070): Max retries exceeded with url: /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/ClickHouse/tests/integration/helpers/cluster.py", line 2565, in wait_hdfs_to_start self.hdfs_api.write_data("/somefilewithrandomname222", "1") File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 211, in write_data response = self.req_wrapper( File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 132, in req_wrapper response_data = func(**kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/api.py", line 130, in put return request("put", url, data=data, **kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/api.py", line 59, in request return session.request(method=method, url=url, **kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 589, in request resp = self.send(prep, **send_kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 703, in send r = adapter.send(request, **kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/adapters.py", line 519, in send raise ConnectionError(e, request=request) requests.exceptions.ConnectionError: HTTPConnectionPool(host='172.16.2.2', 
port=50070): Max retries exceeded with url: /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) write_data protocol:http host:hdfs1 port:50070 path: /somefilewithrandomname222 user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 Can't connect to HDFS or preparations are not done yet HTTPConnectionPool(host='172.16.2.2', port=50070): Max retries exceeded with url: /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 203, in _new_conn sock = connection.create_connection( File "/usr/local/lib/python3.10/dist-packages/urllib3/util/connection.py", line 85, in create_connection raise err File "/usr/local/lib/python3.10/dist-packages/urllib3/util/connection.py", line 73, in create_connection sock.connect(sa) ConnectionRefusedError: [Errno 111] Connection refused The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 791, in urlopen response = self._make_request( File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 497, in _make_request conn.request( File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 395, in request self.endheaders() File "/usr/lib/python3.10/http/client.py", line 1278, in endheaders self._send_output(message_body, encode_chunked=encode_chunked) File "/usr/lib/python3.10/http/client.py", line 1038, in _send_output self.send(msg) File "/usr/lib/python3.10/http/client.py", line 976, in send self.connect() File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 243, in connect self.sock = self._new_conn() File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 218, in _new_conn raise NewConnectionError( urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/requests/adapters.py", line 486, in send resp = conn.urlopen( File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 845, in urlopen retries = retries.increment( File "/usr/local/lib/python3.10/dist-packages/urllib3/util/retry.py", line 515, in increment raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='172.16.2.2', port=50070): Max retries exceeded with url: /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/ClickHouse/tests/integration/helpers/cluster.py", line 2565, in wait_hdfs_to_start self.hdfs_api.write_data("/somefilewithrandomname222", "1") File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 211, in 
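Every repeated record in this section comes from the same readiness probe: wait_hdfs_to_start keeps calling hdfs_api.write_data with a throwaway file until the write finally succeeds, logging "Can't connect to HDFS or preparations are not done yet" plus the traceback after each failed attempt. A minimal sketch of that polling pattern follows, assuming an hdfs_api object like the one in the log; the function name, timeout, and interval are illustrative, not the actual values in helpers/cluster.py:

    import time

    def wait_for_hdfs(hdfs_api, timeout=120.0, interval=1.0):
        """Poll HDFS by writing a throwaway file until the write succeeds (sketch)."""
        deadline = time.monotonic() + timeout
        last_error = None
        while time.monotonic() < deadline:
            try:
                # Every failure mode in this log (connection refused, namenode
                # startup mode, safe mode) surfaces here as an exception.
                hdfs_api.write_data("/somefilewithrandomname222", "1")
                return
            except Exception as err:
                last_error = err
                print("Can't connect to HDFS or preparations are not done yet")
                time.sleep(interval)
        raise RuntimeError(f"HDFS did not become ready in {timeout}s: {last_error!r}")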
write_data protocol:http host:hdfs1 port:50070 path: /somefilewithrandomname222 user:root, principal:None
CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None}
Starting new HTTP connection (1): 172.16.2.2:50070
http://172.16.2.2:50070 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true HTTP/1.1" 403 None
response_data:b'{"RemoteException":{"exception":"RetriableException","javaClassName":"org.apache.hadoop.ipc.RetriableException","message":"Namenode is in startup mode"}}' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:14:04 GMT, Fri, 04 Apr 2025 18:14:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:14:04 GMT, Fri, 04 Apr 2025 18:14:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Server': 'Jetty(6.1.26)'}
unexpected response_data.status_code 403 != 307
CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None}
Starting new HTTP connection (1): 172.16.2.2:50070
http://172.16.2.2:50070 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true HTTP/1.1" 403 None
response_data:b'{"RemoteException":{"exception":"RetriableException","javaClassName":"org.apache.hadoop.ipc.RetriableException","message":"Namenode is in startup mode"}}' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:14:07 GMT, Fri, 04 Apr 2025 18:14:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:14:07 GMT, Fri, 04 Apr 2025 18:14:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Server': 'Jetty(6.1.26)'}
unexpected response_data.status_code 403 != 307
Can't connect to HDFS or preparations are not done yet
403 Client Error: Forbidden for url: http://172.16.2.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true
Traceback (most recent call last):
  File "/ClickHouse/tests/integration/helpers/cluster.py", line 2565, in wait_hdfs_to_start
    self.hdfs_api.write_data("/somefilewithrandomname222", "1")
  File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 211, in write_data
    response = self.req_wrapper(
  File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 143, in req_wrapper
    response_data.raise_for_status()
  File "/usr/local/lib/python3.10/dist-packages/requests/models.py", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: http://172.16.2.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true
write_data protocol:http host:hdfs1 port:50070 path: /somefilewithrandomname222 user:root, principal:None
CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None}
Starting new HTTP connection (1): 172.16.2.2:50070
http://172.16.2.2:50070 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true HTTP/1.1" 403 None
response_data:b'{"RemoteException":{"exception":"IOException","javaClassName":"java.io.IOException","message":"Failed to find datanode, suggest to check cluster health."}}' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:14:15 GMT, Fri, 04 Apr 2025 18:14:15 GMT', 'Date': 'Fri, 04 Apr 2025 18:14:15 GMT, Fri, 04 Apr 2025 18:14:15 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Server': 'Jetty(6.1.26)'}
unexpected response_data.status_code 403 != 307
CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None}
Starting new HTTP connection (1): 172.16.2.2:50070
http://172.16.2.2:50070 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true HTTP/1.1" 403 None
response_data:b'{"RemoteException":{"exception":"IOException","javaClassName":"java.io.IOException","message":"Failed to find datanode, suggest to check cluster health."}}' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:14:16 GMT, Fri, 04 Apr 2025 18:14:16 GMT', 'Date': 'Fri, 04 Apr 2025 18:14:16 GMT, Fri, 04 Apr 2025 18:14:16 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Server': 'Jetty(6.1.26)'}
unexpected response_data.status_code 403 != 307
Can't connect to HDFS or preparations are not done yet
403 Client Error: Forbidden for url: http://172.16.2.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true
Traceback (most recent call last):
  File "/ClickHouse/tests/integration/helpers/cluster.py", line 2565, in wait_hdfs_to_start
    self.hdfs_api.write_data("/somefilewithrandomname222", "1")
  File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 211, in write_data
    response = self.req_wrapper(
  File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 143, in req_wrapper
    response_data.raise_for_status()
  File "/usr/local/lib/python3.10/dist-packages/requests/models.py", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: http://172.16.2.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true
write_data protocol:http host:hdfs1 port:50070 path: /somefilewithrandomname222 user:root, principal:None
CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None}
Starting new HTTP connection (1): 172.16.2.2:50070
http://172.16.2.2:50070 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true HTTP/1.1" 307 0
response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:14:18 GMT, Fri, 04 Apr 2025 18:14:18 GMT', 'Date': 'Fri, 04 Apr 2025 18:14:18 GMT, Fri, 04 Apr 2025 18:14:18 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'}
HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:14:18 GMT, Fri, 04 Apr 2025 18:14:18 GMT', 'Date': 'Fri, 04 Apr 2025 18:14:18 GMT, Fri, 04 Apr 2025 18:14:18 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'}
CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'1', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/somefilewithrandomname222', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None}
Starting new HTTP connection (1): 172.16.2.2:50075
http://172.16.2.2:50075 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root HTTP/1.1" 403 None
response_data:b'{"RemoteException":{"exception":"RemoteException","javaClassName":"org.apache.hadoop.ipc.RemoteException","message":"Cannot create file/somefilewithrandomname222. Name node is in safe mode.\\nThe reported blocks 31 has reached the threshold 0.9990 of total blocks 31. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 28 seconds.\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2630)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2519)\\n\\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:566)\\n\\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:394)\\n\\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\\n\\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)\\n\\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)\\n\\tat java.security.AccessController.doPrivileged(Native Method)\\n\\tat javax.security.auth.Subject.doAs(Subject.java:415)\\n\\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)\\n\\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)\\n"}}' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:14:18 GMT, Fri, 04 Apr 2025 18:14:18 GMT', 'Date': 'Fri, 04 Apr 2025 18:14:18 GMT, Fri, 04 Apr 2025 18:14:18 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Server': 'Jetty(6.1.26)'}
unexpected response_data.status_code 403 != 201
CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'1', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/somefilewithrandomname222', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None}
Starting new HTTP connection (1): 172.16.2.2:50075
http://172.16.2.2:50075 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root HTTP/1.1" 403 None
response_data:b'{"RemoteException":{"exception":"RemoteException","javaClassName":"org.apache.hadoop.ipc.RemoteException","message":"Cannot create file/somefilewithrandomname222. Name node is in safe mode.\\nThe reported blocks 31 has reached the threshold 0.9990 of total blocks 31. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 27 seconds.\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2630)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2519)\\n\\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:566)\\n\\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:394)\\n\\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\\n\\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)\\n\\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)\\n\\tat java.security.AccessController.doPrivileged(Native Method)\\n\\tat javax.security.auth.Subject.doAs(Subject.java:415)\\n\\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)\\n\\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)\\n"}}' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:14:20 GMT, Fri, 04 Apr 2025 18:14:20 GMT', 'Date': 'Fri, 04 Apr 2025 18:14:20 GMT, Fri, 04 Apr 2025 18:14:20 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Server': 'Jetty(6.1.26)'}
unexpected response_data.status_code 403 != 201
Can't connect to HDFS or preparations are not done yet
403 Client Error: Forbidden for url: http://172.16.2.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root
Traceback (most recent call last):
  File "/ClickHouse/tests/integration/helpers/cluster.py", line 2565, in wait_hdfs_to_start
    self.hdfs_api.write_data("/somefilewithrandomname222", "1")
  File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 242, in write_data
    response = self.req_wrapper(
  File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 143, in req_wrapper
    response_data.raise_for_status()
  File "/usr/local/lib/python3.10/dist-packages/requests/models.py", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: http://172.16.2.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root
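The 307 record above is WebHDFS's two-step CREATE: the first PUT goes to the namenode (port 50070) with no payload and, once the namenode answers, returns 307 with a Location header naming a datanode (port 50075); the payload is then PUT to that datanode URL, which returns 201 on success. A sketch of the same flow with requests, assuming the unauthenticated user.name=root access seen in the CALL records:

    import requests

    def webhdfs_create(namenode, path, data, user="root"):
        """Two-step WebHDFS CREATE, mirroring the CALL records above (sketch)."""
        # Step 1: ask the namenode where to write. Redirects must not be
        # followed automatically, because the payload belongs on the datanode.
        r1 = requests.put(
            f"http://{namenode}:50070/webhdfs/v1{path}",
            params={"op": "CREATE", "overwrite": "true"},
            allow_redirects=False,
        )
        if r1.status_code != 307:
            r1.raise_for_status()  # e.g. 403 RetriableException during startup
            raise RuntimeError(f"expected 307 from namenode, got {r1.status_code}")
        # Step 2: PUT the bytes to the datanode URL from the Location header.
        r2 = requests.put(
            r1.headers["Location"],
            data=data,
            params={"user.name": user},
            headers={"content-type": "text/plain"},
        )
        if r2.status_code != 201:
            r2.raise_for_status()  # e.g. 403 RemoteException while in safe mode
        return r2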
'http://hdfs1:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:14:22 GMT, Fri, 04 Apr 2025 18:14:22 GMT', 'Date': 'Fri, 04 Apr 2025 18:14:22 GMT, Fri, 04 Apr 2025 18:14:22 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'1', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/somefilewithrandomname222', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root HTTP/1.1" 403 None response_data:b'{"RemoteException":{"exception":"RemoteException","javaClassName":"org.apache.hadoop.ipc.RemoteException","message":"Cannot create file/somefilewithrandomname222. Name node is in safe mode.\\nThe reported blocks 31 has reached the threshold 0.9990 of total blocks 31. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 25 seconds.\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2630)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2519)\\n\\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:566)\\n\\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:394)\\n\\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\\n\\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)\\n\\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)\\n\\tat java.security.AccessController.doPrivileged(Native Method)\\n\\tat javax.security.auth.Subject.doAs(Subject.java:415)\\n\\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)\\n\\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)\\n"}}' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:14:22 GMT, Fri, 04 Apr 2025 18:14:22 GMT', 'Date': 'Fri, 04 Apr 2025 18:14:22 GMT, Fri, 04 Apr 2025 18:14:22 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Server': 'Jetty(6.1.26)'} unexpected response_data.status_code 403 != 201 CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'1', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/somefilewithrandomname222', 'user.name': 
'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root HTTP/1.1" 403 None response_data:b'{"RemoteException":{"exception":"RemoteException","javaClassName":"org.apache.hadoop.ipc.RemoteException","message":"Cannot create file/somefilewithrandomname222. Name node is in safe mode.\\nThe reported blocks 31 has reached the threshold 0.9990 of total blocks 31. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 24 seconds.\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2630)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2519)\\n\\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:566)\\n\\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:394)\\n\\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\\n\\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)\\n\\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)\\n\\tat java.security.AccessController.doPrivileged(Native Method)\\n\\tat javax.security.auth.Subject.doAs(Subject.java:415)\\n\\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)\\n\\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)\\n"}}' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:14:23 GMT, Fri, 04 Apr 2025 18:14:23 GMT', 'Date': 'Fri, 04 Apr 2025 18:14:23 GMT, Fri, 04 Apr 2025 18:14:23 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Server': 'Jetty(6.1.26)'} unexpected response_data.status_code 403 != 201 Can't connect to HDFS or preparations are not done yet 403 Client Error: Forbidden for url: http://172.16.2.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root Traceback (most recent call last): File "/ClickHouse/tests/integration/helpers/cluster.py", line 2565, in wait_hdfs_to_start self.hdfs_api.write_data("/somefilewithrandomname222", "1") File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 242, in write_data response = self.req_wrapper( File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 143, in req_wrapper response_data.raise_for_status() File "/usr/local/lib/python3.10/dist-packages/requests/models.py", line 1021, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: http://172.16.2.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root write_data protocol:http host:hdfs1 port:50070 path: /somefilewithrandomname222 user:root, principal:None CALL: 
{'url': 'http://172.16.2.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:14:25 GMT, Fri, 04 Apr 2025 18:14:25 GMT', 'Date': 'Fri, 04 Apr 2025 18:14:25 GMT, Fri, 04 Apr 2025 18:14:25 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:14:25 GMT, Fri, 04 Apr 2025 18:14:25 GMT', 'Date': 'Fri, 04 Apr 2025 18:14:25 GMT, Fri, 04 Apr 2025 18:14:25 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'1', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/somefilewithrandomname222', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root HTTP/1.1" 403 None response_data:b'{"RemoteException":{"exception":"RemoteException","javaClassName":"org.apache.hadoop.ipc.RemoteException","message":"Cannot create file/somefilewithrandomname222. Name node is in safe mode.\\nThe reported blocks 31 has reached the threshold 0.9990 of total blocks 31. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. 
Safe mode will be turned off automatically in 22 seconds.\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2630)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2519)\\n\\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:566)\\n\\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:394)\\n\\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\\n\\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)\\n\\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)\\n\\tat java.security.AccessController.doPrivileged(Native Method)\\n\\tat javax.security.auth.Subject.doAs(Subject.java:415)\\n\\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)\\n\\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)\\n"}}' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:14:25 GMT, Fri, 04 Apr 2025 18:14:25 GMT', 'Date': 'Fri, 04 Apr 2025 18:14:25 GMT, Fri, 04 Apr 2025 18:14:25 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Server': 'Jetty(6.1.26)'} unexpected response_data.status_code 403 != 201 CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'1', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/somefilewithrandomname222', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root HTTP/1.1" 403 None response_data:b'{"RemoteException":{"exception":"RemoteException","javaClassName":"org.apache.hadoop.ipc.RemoteException","message":"Cannot create file/somefilewithrandomname222. Name node is in safe mode.\\nThe reported blocks 31 has reached the threshold 0.9990 of total blocks 31. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. 
Safe mode will be turned off automatically in 21 seconds.\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2630)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2519)\\n\\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:566)\\n\\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:394)\\n\\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\\n\\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)\\n\\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)\\n\\tat java.security.AccessController.doPrivileged(Native Method)\\n\\tat javax.security.auth.Subject.doAs(Subject.java:415)\\n\\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)\\n\\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)\\n"}}' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:14:26 GMT, Fri, 04 Apr 2025 18:14:26 GMT', 'Date': 'Fri, 04 Apr 2025 18:14:26 GMT, Fri, 04 Apr 2025 18:14:26 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Server': 'Jetty(6.1.26)'} unexpected response_data.status_code 403 != 201 Can't connect to HDFS or preparations are not done yet 403 Client Error: Forbidden for url: http://172.16.2.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root Traceback (most recent call last): File "/ClickHouse/tests/integration/helpers/cluster.py", line 2565, in wait_hdfs_to_start self.hdfs_api.write_data("/somefilewithrandomname222", "1") File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 242, in write_data response = self.req_wrapper( File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 143, in req_wrapper response_data.raise_for_status() File "/usr/local/lib/python3.10/dist-packages/requests/models.py", line 1021, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: http://172.16.2.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root write_data protocol:http host:hdfs1 port:50070 path: /somefilewithrandomname222 user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:14:28 GMT, Fri, 04 Apr 2025 18:14:28 GMT', 'Date': 'Fri, 04 Apr 2025 18:14:28 GMT, Fri, 04 Apr 2025 18:14:28 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 
'http://hdfs1:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:14:28 GMT, Fri, 04 Apr 2025 18:14:28 GMT', 'Date': 'Fri, 04 Apr 2025 18:14:28 GMT, Fri, 04 Apr 2025 18:14:28 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'1', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/somefilewithrandomname222', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root HTTP/1.1" 403 None response_data:b'{"RemoteException":{"exception":"RemoteException","javaClassName":"org.apache.hadoop.ipc.RemoteException","message":"Cannot create file/somefilewithrandomname222. Name node is in safe mode.\\nThe reported blocks 31 has reached the threshold 0.9990 of total blocks 31. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 19 seconds.\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2630)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2519)\\n\\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:566)\\n\\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:394)\\n\\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\\n\\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)\\n\\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)\\n\\tat java.security.AccessController.doPrivileged(Native Method)\\n\\tat javax.security.auth.Subject.doAs(Subject.java:415)\\n\\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)\\n\\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)\\n"}}' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:14:28 GMT, Fri, 04 Apr 2025 18:14:28 GMT', 'Date': 'Fri, 04 Apr 2025 18:14:28 GMT, Fri, 04 Apr 2025 18:14:28 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Server': 'Jetty(6.1.26)'} unexpected response_data.status_code 403 != 201 CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'1', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/somefilewithrandomname222', 'user.name': 
'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root HTTP/1.1" 403 None response_data:b'{"RemoteException":{"exception":"RemoteException","javaClassName":"org.apache.hadoop.ipc.RemoteException","message":"Cannot create file/somefilewithrandomname222. Name node is in safe mode.\\nThe reported blocks 31 has reached the threshold 0.9990 of total blocks 31. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 18 seconds.\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2630)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2519)\\n\\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:566)\\n\\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:394)\\n\\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\\n\\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)\\n\\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)\\n\\tat java.security.AccessController.doPrivileged(Native Method)\\n\\tat javax.security.auth.Subject.doAs(Subject.java:415)\\n\\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)\\n\\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)\\n"}}' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:14:29 GMT, Fri, 04 Apr 2025 18:14:29 GMT', 'Date': 'Fri, 04 Apr 2025 18:14:29 GMT, Fri, 04 Apr 2025 18:14:29 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Server': 'Jetty(6.1.26)'} unexpected response_data.status_code 403 != 201 Can't connect to HDFS or preparations are not done yet 403 Client Error: Forbidden for url: http://172.16.2.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root Traceback (most recent call last): File "/ClickHouse/tests/integration/helpers/cluster.py", line 2565, in wait_hdfs_to_start self.hdfs_api.write_data("/somefilewithrandomname222", "1") File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 242, in write_data response = self.req_wrapper( File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 143, in req_wrapper response_data.raise_for_status() File "/usr/local/lib/python3.10/dist-packages/requests/models.py", line 1021, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: http://172.16.2.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root write_data protocol:http host:hdfs1 port:50070 path: /somefilewithrandomname222 user:root, principal:None CALL: 
{'url': 'http://172.16.2.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:14:31 GMT, Fri, 04 Apr 2025 18:14:31 GMT', 'Date': 'Fri, 04 Apr 2025 18:14:31 GMT, Fri, 04 Apr 2025 18:14:31 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:14:31 GMT, Fri, 04 Apr 2025 18:14:31 GMT', 'Date': 'Fri, 04 Apr 2025 18:14:31 GMT, Fri, 04 Apr 2025 18:14:31 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'1', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/somefilewithrandomname222', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root HTTP/1.1" 403 None response_data:b'{"RemoteException":{"exception":"RemoteException","javaClassName":"org.apache.hadoop.ipc.RemoteException","message":"Cannot create file/somefilewithrandomname222. Name node is in safe mode.\\nThe reported blocks 31 has reached the threshold 0.9990 of total blocks 31. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. 
Safe mode will be turned off automatically in 15 seconds. (same org.apache.hadoop RemoteException stack trace as above)"}}' unexpected response_data.status_code 403 != 201
(the same CALL / 403 safe-mode RemoteException exchange, including the identical stack trace, the "Can't connect to HDFS or preparations are not done yet" message and the requests.exceptions.HTTPError traceback through helpers/cluster.py wait_hdfs_to_start and helpers/hdfs_api.py write_data, repeats while the NameNode countdown drops through 14, 12, 11, 9, 8, 6, 5, 3, 2 and 0 seconds)
CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'1', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/somefilewithrandomname222', 'user.name':
'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root HTTP/1.1" 201 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:14:47 GMT, Fri, 04 Apr 2025 18:14:47 GMT', 'Date': 'Fri, 04 Apr 2025 18:14:47 GMT, Fri, 04 Apr 2025 18:14:47 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/somefilewithrandomname222', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:14:47 GMT, Fri, 04 Apr 2025 18:14:47 GMT', 'Date': 'Fri, 04 Apr 2025 18:14:47 GMT, Fri, 04 Apr 2025 18:14:47 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/somefilewithrandomname222', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} Connected to HDFS and SafeMode disabled! Trying to create Minio instance by command docker compose --project-name rootteststorageiceberg-gw0 --env-file /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --verbose up -d Command:[docker compose --project-name rootteststorageiceberg-gw0 --env-file /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --verbose up -d] Stderr:time="2025-04-04T18:14:48Z" level=trace msg="Docker Desktop integration not enabled" Stderr: Volume "rootteststorageiceberg-gw0_data1-1" Creating Stderr: Volume "rootteststorageiceberg-gw0_data1-1" Created Stderr:time="2025-04-04T18:14:48Z" level=warning msg="Found orphan containers ([rootteststorageiceberg-gw0-hdfs1-1]) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up." 
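
For orientation, the safe-mode wait loop above boils down to the standard two-step WebHDFS CREATE: a PUT to the NameNode (port 50070) answers with a 307 redirect, the payload is then PUT to the DataNode named in the Location header (port 50075), and a 403 RemoteException means the NameNode is still in safe mode, so the caller retries until it gets 201 Created. A minimal sketch, assuming only the requests library; the function name and retry policy are illustrative, not the actual helpers/hdfs_api.py code:

```python
import time
import requests

def webhdfs_write(namenode, path, data, user="root", attempts=30, delay=3.0):
    # Hypothetical retry loop mirroring the log above.
    create_url = f"http://{namenode}:50070/webhdfs/v1{path}?op=CREATE"
    for _ in range(attempts):
        # Step 1: ask the NameNode where to write; do not follow the redirect.
        r = requests.put(create_url, params={"overwrite": "true"},
                         allow_redirects=False)
        datanode_url = r.headers["Location"]
        # Step 2: send the payload to the DataNode from the Location header.
        r = requests.put(datanode_url, data=data,
                         params={"user.name": user},
                         headers={"content-type": "text/plain"},
                         allow_redirects=False)
        if r.status_code == 201:   # created; safe mode is off
            return
        time.sleep(delay)          # 403 -> NameNode still in safe mode, retry
    r.raise_for_status()
```
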
Stderr: Container rootteststorageiceberg-gw0-proxy2-1 Creating
Stderr: Container rootteststorageiceberg-gw0-proxy1-1 Creating
Stderr: proxy2 The requested image's platform (linux/arm64/v8) does not match the detected host platform (linux/amd64/v3) and no specific platform was requested
Stderr: proxy1 The requested image's platform (linux/arm64/v8) does not match the detected host platform (linux/amd64/v3) and no specific platform was requested
Stderr: Container rootteststorageiceberg-gw0-proxy2-1 Created
Stderr: Container rootteststorageiceberg-gw0-proxy1-1 Created
Stderr: Container rootteststorageiceberg-gw0-minio1-1 Creating
Stderr: Container rootteststorageiceberg-gw0-resolver-1 Creating
Stderr: Container rootteststorageiceberg-gw0-minio1-1 Created
Stderr: Container rootteststorageiceberg-gw0-resolver-1 Created
Stderr: Container rootteststorageiceberg-gw0-proxy2-1 Starting
Stderr: Container rootteststorageiceberg-gw0-proxy1-1 Starting
Stderr: Container rootteststorageiceberg-gw0-proxy1-1 Started
Stderr: Container rootteststorageiceberg-gw0-proxy2-1 Started
Stderr: Container rootteststorageiceberg-gw0-minio1-1 Starting
Stderr: Container rootteststorageiceberg-gw0-resolver-1 Starting
Stderr: Container rootteststorageiceberg-gw0-resolver-1 Started
Stderr: Container rootteststorageiceberg-gw0-minio1-1 Started
Stderr:time="2025-04-04T18:14:49Z" level=debug msg="otel error" error=""
Stderr:time="2025-04-04T18:14:49Z" level=debug msg="otel error" error=""
Trying to connect to Minio...
get_instance_ip instance_name=minio1
http://localhost:None "GET /v1.46/containers/rootteststorageiceberg-gw0-minio1-1/json HTTP/1.1" 200 None
get_instance_ip instance_name=proxy1
http://localhost:None "GET /v1.46/containers/rootteststorageiceberg-gw0-proxy1-1/json HTTP/1.1" 200 None
Starting new HTTP connection (1): 172.16.2.5:9001
Incremented Retry for (url='/'): Retry(total=2, connect=None, read=None, redirect=None, status=None)
Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x…>: Failed to establish a new connection: [Errno 111] Connection refused')': /
Starting new HTTP connection (2): 172.16.2.5:9001
Incremented Retry for (url='/'): Retry(total=1, connect=None, read=None, redirect=None, status=None)
Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x…>: Failed to establish a new connection: [Errno 111] Connection refused')': /
Starting new HTTP connection (3): 172.16.2.5:9001
Incremented Retry for (url='/'): Retry(total=0, connect=None, read=None, redirect=None, status=None)
Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x…>: Failed to establish a new connection: [Errno 111] Connection refused')': /
Starting new HTTP connection (4): 172.16.2.5:9001
Can't connect to Minio: HTTPConnectionPool(host='172.16.2.5', port=9001): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x…>: Failed to establish a new connection: [Errno 111] Connection refused'))
Starting new HTTP connection (5): 172.16.2.5:9001
http://172.16.2.5:9001 "GET / HTTP/1.1" 200 0
Connected to Minio.
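
The Minio probe above is a plain poll-until-200 loop: the harness keeps issuing GET / against the Minio endpoint, swallowing ECONNREFUSED, until the server answers. A minimal sketch under the same assumptions (endpoint taken from the log; the helper name is hypothetical):

```python
import time
import requests

def wait_for_minio(endpoint="http://172.16.2.5:9001", attempts=10, delay=1.0):
    # Poll until Minio answers 200 instead of [Errno 111] Connection refused.
    for _ in range(attempts):
        try:
            if requests.get(endpoint + "/", timeout=2).status_code == 200:
                return True        # "Connected to Minio."
        except requests.ConnectionError:
            pass                   # container not listening yet
        time.sleep(delay)
    return False
```
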
http://172.16.2.5:9001 "GET /root?location= HTTP/1.1" 404 0 http://172.16.2.5:9001 "PUT /root HTTP/1.1" 200 0 S3 bucket 'root' created http://172.16.2.5:9001 "GET /root2?location= HTTP/1.1" 404 0 http://172.16.2.5:9001 "PUT /root2 HTTP/1.1" 200 0 S3 bucket 'root2' created Trying to create Azurite instance by command docker compose --project-name rootteststorageiceberg-gw0 --env-file /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_azurite.yml --verbose up -d Command:[docker compose --project-name rootteststorageiceberg-gw0 --env-file /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_azurite.yml --verbose up -d] Stderr:time="2025-04-04T18:14:50Z" level=trace msg="Docker Desktop integration not enabled" Stderr:time="2025-04-04T18:14:50Z" level=warning msg="Found orphan containers ([rootteststorageiceberg-gw0-minio1-1 rootteststorageiceberg-gw0-resolver-1 rootteststorageiceberg-gw0-proxy2-1 rootteststorageiceberg-gw0-proxy1-1 rootteststorageiceberg-gw0-hdfs1-1]) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up." Stderr: Container rootteststorageiceberg-gw0-azurite1-1 Creating Stderr: Container rootteststorageiceberg-gw0-azurite1-1 Created Stderr: Container rootteststorageiceberg-gw0-azurite1-1 Starting Stderr: Container rootteststorageiceberg-gw0-azurite1-1 Started Stderr:time="2025-04-04T18:14:50Z" level=debug msg="otel error" error="" Stderr:time="2025-04-04T18:14:50Z" level=debug msg="otel error" error="" Trying to connect to Azurite Request URL: 'http://127.0.0.1:30000/devstoreaccount1/?restype=REDACTED&comp=REDACTED' Request method: 'GET' Request headers: 'x-ms-version': 'REDACTED' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b3a2b048-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' No body was attached to the request Starting new HTTP connection (1): 127.0.0.1:30000 http://127.0.0.1:30000 "GET /devstoreaccount1/?restype=account&comp=properties HTTP/1.1" 200 0 Response status: 200 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'x-ms-client-request-id': 'b3a2b048-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '4db9a46b-6801-421b-9156-84d49bc9f707' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:14:51 GMT' 'x-ms-sku-name': 'REDACTED' 'x-ms-account-kind': 'REDACTED' 'x-ms-is-hns-enabled': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' {'client_request_id': 'b3a2b048-1180-11f0-918b-0242ac110002', 'request_id': '4db9a46b-6801-421b-9156-84d49bc9f707', 'version': '2025-05-05', 'date': datetime.datetime(2025, 4, 4, 18, 14, 51, tzinfo=datetime.timezone.utc), 'sku_name': 'Standard_RAGRS', 'account_kind': 'StorageV2', 'is_hns_enabled': False} Request URL: 'http://127.0.0.1:30000/devstoreaccount1/?comp=REDACTED&prefix=REDACTED&include=REDACTED' Request method: 'GET' Request headers: 'x-ms-version': 'REDACTED' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b3a7c326-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' No 
body was attached to the request http://127.0.0.1:30000 "GET /devstoreaccount1/?comp=list&prefix=azurite-container&include= HTTP/1.1" 200 None Response status: 200 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'x-ms-client-request-id': 'b3a7c326-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'd10d5a96-65ba-4a43-82e5-7833d18ca17b' 'x-ms-version': 'REDACTED' 'content-type': 'application/xml' 'Date': 'Fri, 04 Apr 2025 18:14:51 GMT' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Transfer-Encoding': 'chunked' Request URL: 'http://127.0.0.1:30000/devstoreaccount1/azurite-container?restype=REDACTED' Request method: 'GET' Request headers: 'x-ms-version': 'REDACTED' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b3a9214e-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' No body was attached to the request http://127.0.0.1:30000 "GET /devstoreaccount1/azurite-container?restype=container HTTP/1.1" 404 None Response status: 404 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'x-ms-error-code': 'ContainerNotFound' 'x-ms-request-id': 'b487e38f-c9b6-4ad5-8947-fad625fc8bba' 'content-type': 'application/xml' 'Date': 'Fri, 04 Apr 2025 18:14:51 GMT' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Transfer-Encoding': 'chunked' azurite container 'azurite-container' doesn't exist, creating it Request URL: 'http://127.0.0.1:30000/devstoreaccount1/azurite-container?restype=REDACTED' Request method: 'PUT' Request headers: 'x-ms-version': 'REDACTED' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b3aa1a36-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' No body was attached to the request http://127.0.0.1:30000 "PUT /devstoreaccount1/azurite-container?restype=container HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x23697E5161FF9A0"' 'last-modified': 'Fri, 04 Apr 2025 18:14:51 GMT' 'x-ms-client-request-id': 'b3aa1a36-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'a7bb09c1-1043-4c39-9c5a-13b9497ae714' 'x-ms-version': 'REDACTED' 'Date': 'Fri, 04 Apr 2025 18:14:51 GMT' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/.env --project-name rootteststorageiceberg-gw0 --file /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_hdfs.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_azurite.yml --file /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node2/docker-compose.yml --file /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node3/docker-compose.yml up -d --no-recreate') Command:[docker compose --env-file /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/.env --project-name rootteststorageiceberg-gw0 --file /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node1/docker-compose.yml --file 
/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_hdfs.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_azurite.yml --file /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node2/docker-compose.yml --file /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node3/docker-compose.yml up -d --no-recreate] Stderr: Container rootteststorageiceberg-gw0-proxy1-1 Running Stderr: Container rootteststorageiceberg-gw0-node2-1 Creating Stderr: Container rootteststorageiceberg-gw0-hdfs1-1 Running Stderr: Container rootteststorageiceberg-gw0-node3-1 Creating Stderr: Container rootteststorageiceberg-gw0-azurite1-1 Running Stderr: Container rootteststorageiceberg-gw0-proxy2-1 Running Stderr: Container rootteststorageiceberg-gw0-resolver-1 Running Stderr: Container rootteststorageiceberg-gw0-minio1-1 Running Stderr: Container rootteststorageiceberg-gw0-node1-1 Creating Stderr: Container rootteststorageiceberg-gw0-node1-1 Created Stderr: Container rootteststorageiceberg-gw0-node2-1 Created Stderr: Container rootteststorageiceberg-gw0-node3-1 Created Stderr: Container rootteststorageiceberg-gw0-node2-1 Starting Stderr: Container rootteststorageiceberg-gw0-node3-1 Starting Stderr: Container rootteststorageiceberg-gw0-node1-1 Starting Stderr: Container rootteststorageiceberg-gw0-node3-1 Started Stderr: Container rootteststorageiceberg-gw0-node2-1 Started Stderr: Container rootteststorageiceberg-gw0-node1-1 Started ClickHouse instance created get_instance_ip instance_name=node1 http://localhost:None "GET /v1.46/containers/rootteststorageiceberg-gw0-node1-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node1 http://localhost:None "GET /v1.46/containers/rootteststorageiceberg-gw0-node1-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node1, ip: 172.16.2.10... http://localhost:None "GET /v1.46/containers/rootteststorageiceberg-gw0-node1-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/e48f025d55ea38ce2cd0b70de2e18e332327cf16340191fe9711923c883af5c8/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/e48f025d55ea38ce2cd0b70de2e18e332327cf16340191fe9711923c883af5c8/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/e48f025d55ea38ce2cd0b70de2e18e332327cf16340191fe9711923c883af5c8/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/e48f025d55ea38ce2cd0b70de2e18e332327cf16340191fe9711923c883af5c8/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/e48f025d55ea38ce2cd0b70de2e18e332327cf16340191fe9711923c883af5c8/json HTTP/1.1" 200 None ClickHouse node1 started get_instance_ip instance_name=node2 http://localhost:None "GET /v1.46/containers/rootteststorageiceberg-gw0-node2-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node2 http://localhost:None "GET /v1.46/containers/rootteststorageiceberg-gw0-node2-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node2, ip: 172.16.2.8... 
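
The "Waiting for ClickHouse start" records above are the harness polling the container's JSON through the Docker API until the server inside is reachable. A roughly equivalent, hypothetical standalone check (docker SDK for Python, plus ClickHouse's /ping endpoint, which returns "Ok." once HTTP on port 8123 is up; this is not the harness's own helper):

```python
import time
import docker     # docker SDK for Python
import requests

def wait_clickhouse(container_name, ip, attempts=60, delay=0.5):
    client = docker.from_env()
    for _ in range(attempts):
        # Same container-inspect call seen as GET /containers/<id>/json above.
        state = client.containers.get(container_name).attrs["State"]
        if state.get("Running"):
            try:
                if requests.get(f"http://{ip}:8123/ping", timeout=1).text.strip() == "Ok.":
                    return True
            except requests.ConnectionError:
                pass
        time.sleep(delay)
    return False
```
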
http://localhost:None "GET /v1.46/containers/rootteststorageiceberg-gw0-node2-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/6fa617605cb6f43203da96e10d8eea868705d7ee7253f90051dd0adb7f64faf6/json HTTP/1.1" 200 None ClickHouse node2 started get_instance_ip instance_name=node3 http://localhost:None "GET /v1.46/containers/rootteststorageiceberg-gw0-node3-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node3 http://localhost:None "GET /v1.46/containers/rootteststorageiceberg-gw0-node3-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node3, ip: 172.16.2.9... http://localhost:None "GET /v1.46/containers/rootteststorageiceberg-gw0-node3-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/45983ddac6be8f632a73c266335998c606241ae86ac310435ce6d8560b6d3d10/json HTTP/1.1" 200 None ClickHouse node3 started http://172.16.2.5:9001 "PUT /root?policy= HTTP/1.1" 204 0 http://172.16.2.5:9001 "GET /root-with-auth?location= HTTP/1.1" 404 0 http://172.16.2.5:9001 "PUT /root-with-auth HTTP/1.1" 200 0 S3 bucket created Command to send: r u SparkConf rj e Answer received: !ycorg.apache.spark.SparkConf Command to send: i org.apache.spark.SparkConf bTrue e Answer received: !yro26 Command to send: c o26 set sspark.app.name sspark_test e Answer received: !yro27 Command to send: c o26 set sspark.master slocal e Answer received: !yro28 Command to send: c o26 set sspark.sql.catalog.spark_catalog sorg.apache.iceberg.spark.SparkSessionCatalog e Answer received: !yro29 Command to send: c o26 set sspark.sql.catalog.local sorg.apache.iceberg.spark.SparkCatalog e Answer received: !yro30 Command to send: c o26 set sspark.sql.catalog.spark_catalog.type shadoop e Answer received: !yro31 Command to send: c o26 set sspark.sql.catalog.spark_catalog.warehouse s/iceberg_data e Answer received: !yro32 Command to send: c o26 set sspark.sql.extensions sorg.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions e Answer received: !yro33 Command to send: c o26 get sspark.executor.allowSparkContext sfalse e Answer received: !ysfalse Command to send: c o26 contains sspark.serializer.objectStreamReset e Answer received: !ybfalse Command to send: c o26 set sspark.serializer.objectStreamReset s100 e Answer received: !yro34 Command to send: c o26 contains sspark.rdd.compress e Answer received: !ybfalse Command to send: c o26 set sspark.rdd.compress sTrue e Answer received: !yro35 Command to send: c o26 contains sspark.master e Answer received: !ybtrue Command to send: c o26 contains sspark.app.name e Answer received: !ybtrue Command to send: c o26 contains sspark.master e Answer received: !ybtrue Command to send: c o26 get sspark.master e Answer received: !yslocal Command to send: c o26 contains sspark.app.name e Answer received: !ybtrue Command to send: c o26 get sspark.app.name e Answer received: !ysspark_test Command to send: c o26 contains sspark.home e Answer received: !ybfalse Command to send: c o26 getAll e Answer received: !yto36 Command to send: a e o36 e Answer received: !yi13 Command to send: a g o36 i0 e Answer received: !yro37 Command to send: c o37 _1 e Answer received: !ysspark.master Command to send: c o37 _2 e Answer received: !yslocal Command to send: a e o36 e Answer received: !yi13 Command to send: a g o36 i1 e Answer received: !yro38 Command to send: c o38 _1 e Answer received: !ysspark.sql.catalog.local Command to send: c o38 _2 e Answer received: !ysorg.apache.iceberg.spark.SparkCatalog Command to send: a e o36 e Answer received: !yi13 Command to send: a g 
o36 i2 e Answer received: !yro39 Command to send: c o39 _1 e Answer received: !ysspark.app.name Command to send: c o39 _2 e Answer received: !ysspark_test Command to send: a e o36 e Answer received: !yi13 Command to send: a g o36 i3 e Answer received: !yro40 Command to send: c o40 _1 e Answer received: !ysspark.rdd.compress Command to send: c o40 _2 e Answer received: !ysTrue Command to send: a e o36 e Answer received: !yi13 Command to send: a g o36 i4 e Answer received: !yro41 Command to send: c o41 _1 e Answer received: !ysspark.serializer.objectStreamReset Command to send: c o41 _2 e Answer received: !ys100 Command to send: a e o36 e Answer received: !yi13 Command to send: a g o36 i5 e Answer received: !yro42 Command to send: c o42 _1 e Answer received: !ysspark.sql.catalog.spark_catalog.warehouse Command to send: c o42 _2 e Answer received: !ys/iceberg_data Command to send: a e o36 e Answer received: !yi13 Command to send: a g o36 i6 e Answer received: !yro43 Command to send: c o43 _1 e Answer received: !ysspark.submit.pyFiles Command to send: c o43 _2 e Answer received: !ys Command to send: a e o36 e Answer received: !yi13 Command to send: a g o36 i7 e Answer received: !yro44 Command to send: c o44 _1 e Answer received: !ysspark.sql.catalog.spark_catalog Command to send: c o44 _2 e Answer received: !ysorg.apache.iceberg.spark.SparkSessionCatalog Command to send: a e o36 e Answer received: !yi13 Command to send: a g o36 i8 e Answer received: !yro45 Command to send: c o45 _1 e Answer received: !ysspark.submit.deployMode Command to send: c o45 _2 e Answer received: !ysclient Command to send: a e o36 e Answer received: !yi13 Command to send: a g o36 i9 e Answer received: !yro46 Command to send: c o46 _1 e Answer received: !ysspark.app.submitTime Command to send: c o46 _2 e Answer received: !ys1743790268520 Command to send: a e o36 e Answer received: !yi13 Command to send: a g o36 i10 e Answer received: !yro47 Command to send: c o47 _1 e Answer received: !ysspark.sql.catalog.spark_catalog.type Command to send: c o47 _2 e Answer received: !yshadoop Command to send: a e o36 e Answer received: !yi13 Command to send: a g o36 i11 e Answer received: !yro48 Command to send: c o48 _1 e Answer received: !ysspark.ui.showConsoleProgress Command to send: c o48 _2 e Answer received: !ystrue Command to send: a e o36 e Answer received: !yi13 Command to send: a g o36 i12 e Answer received: !yro49 Command to send: c o49 _1 e Answer received: !ysspark.sql.extensions Command to send: c o49 _2 e Answer received: !ysorg.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions Command to send: a e o36 e Answer received: !yi13 Command to send: r u JavaSparkContext rj e Answer received: !ycorg.apache.spark.api.java.JavaSparkContext Command to send: i org.apache.spark.api.java.JavaSparkContext ro26 e Answer received: !yro50 Command to send: c o50 sc e Answer received: !yro51 Command to send: c o51 conf e Answer received: !yro52 Command to send: r u PythonAccumulatorV2 rj e Answer received: !ycorg.apache.spark.api.python.PythonAccumulatorV2 Command to send: i org.apache.spark.api.python.PythonAccumulatorV2 s127.0.0.1 i63455 s72f9fe8748d825832ed45c4c0694a4e33b9ab7881f74fda6fd8ba7daafb730cb e Answer received: !yro53 Command to send: c o50 sc e Answer received: !yro54 Command to send: c o54 register ro53 e Answer received: !yv Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils isEncryptionEnabled e Answer 
received: !ym Command to send: c z:org.apache.spark.api.python.PythonUtils isEncryptionEnabled ro50 e Answer received: !ybfalse Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils getPythonAuthSocketTimeout e Answer received: !ym Command to send: c z:org.apache.spark.api.python.PythonUtils getPythonAuthSocketTimeout ro50 e Answer received: !yL15 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils getSparkBufferSize e Answer received: !ym Command to send: c z:org.apache.spark.api.python.PythonUtils getSparkBufferSize ro50 e Answer received: !yi65536 Command to send: r u org rj e Answer received: !yp Command to send: r u org.apache rj e Answer received: !yp Command to send: r u org.apache.spark rj e Answer received: !yp Command to send: r u org.apache.spark.SparkFiles rj e Answer received: !ycorg.apache.spark.SparkFiles Command to send: r m org.apache.spark.SparkFiles getRootDirectory e Answer received: !ym Command to send: c z:org.apache.spark.SparkFiles getRootDirectory e Answer received: !ys/tmp/spark-c232781e-7e38-46f8-81c0-e6e5de6d7676/userFiles-4e28738d-ad67-4b1b-a2fe-2e49ce0c0ab3 Command to send: c o52 get sspark.submit.pyFiles s e Answer received: !ys Command to send: r u org rj e Answer received: !yp Command to send: r u org.apache rj e Answer received: !yp Command to send: r u org.apache.spark rj e Answer received: !yp Command to send: r u org.apache.spark.util rj e Answer received: !yp Command to send: r u org.apache.spark.util.Utils rj e Answer received: !ycorg.apache.spark.util.Utils Command to send: r m org.apache.spark.util.Utils getLocalDir e Answer received: !ym Command to send: c o50 sc e Answer received: !yro55 Command to send: c o55 conf e Answer received: !yro56 Command to send: c z:org.apache.spark.util.Utils getLocalDir ro56 e Answer received: !ys/tmp/spark-c232781e-7e38-46f8-81c0-e6e5de6d7676 Command to send: r u org rj e Answer received: !yp Command to send: r u org.apache rj e Answer received: !yp Command to send: r u org.apache.spark rj e Answer received: !yp Command to send: r u org.apache.spark.util rj e Answer received: !yp Command to send: r u org.apache.spark.util.Utils rj e Answer received: !ycorg.apache.spark.util.Utils Command to send: r m org.apache.spark.util.Utils createTempDir e Answer received: !ym Command to send: c z:org.apache.spark.util.Utils createTempDir s/tmp/spark-c232781e-7e38-46f8-81c0-e6e5de6d7676 spyspark e Answer received: !yro57 Command to send: c o57 getAbsolutePath e Answer received: !ys/tmp/spark-c232781e-7e38-46f8-81c0-e6e5de6d7676/pyspark-48f963c5-770c-4b07-8168-9c89af79cee4 Command to send: c o52 get sspark.python.profile sfalse e Answer received: !ysfalse Command to send: r u SparkSession rj e Answer received: !ycorg.apache.spark.sql.SparkSession Command to send: r m org.apache.spark.sql.SparkSession getDefaultSession e Answer received: !ym Command to send: c z:org.apache.spark.sql.SparkSession getDefaultSession e Answer received: !yro58 Command to send: c o58 isDefined e Answer received: !ybfalse Command to send: r u SparkSession rj e Answer received: !ycorg.apache.spark.sql.SparkSession Command to send: c o50 sc e Answer received: !yro59 Command to send: i java.util.HashMap e Answer received: !yao60 Command to send: c o60 put sspark.app.name sspark_test e Answer received: !yn Command to send: c o60 put sspark.master 
slocal e Answer received: !yn Command to send: c o60 put sspark.sql.catalog.spark_catalog sorg.apache.iceberg.spark.SparkSessionCatalog e Answer received: !yn Command to send: c o60 put sspark.sql.catalog.local sorg.apache.iceberg.spark.SparkCatalog e Answer received: !yn Command to send: c o60 put sspark.sql.catalog.spark_catalog.type shadoop e Answer received: !yn Command to send: c o60 put sspark.sql.catalog.spark_catalog.warehouse s/iceberg_data e Answer received: !yn Command to send: c o60 put sspark.sql.extensions sorg.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions e Answer received: !yn Command to send: i org.apache.spark.sql.SparkSession ro59 ro60 e Answer received: !yro61 Command to send: r u SparkSession rj e Answer received: !ycorg.apache.spark.sql.SparkSession Command to send: r m org.apache.spark.sql.SparkSession setDefaultSession e Answer received: !ym Command to send: c z:org.apache.spark.sql.SparkSession setDefaultSession ro61 e Answer received: !yv Command to send: r u SparkSession rj e Answer received: !ycorg.apache.spark.sql.SparkSession Command to send: r m org.apache.spark.sql.SparkSession setActiveSession e Answer received: !ym Command to send: c z:org.apache.spark.sql.SparkSession setActiveSession ro61 e Answer received: !yv Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer?restype=REDACTED' Request method: 'PUT' Request headers: 'x-ms-version': 'REDACTED' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b4506e7c-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' No body was attached to the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer?restype=container HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x1BE2F0449B7E2A0"' 'last-modified': 'Fri, 04 Apr 2025 18:14:52 GMT' 'x-ms-client-request-id': 'b4506e7c-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '5d0e4080-63b5-44a1-b8df-296361418435' 'x-ms-version': 'REDACTED' 'Date': 'Fri, 04 Apr 2025 18:14:52 GMT' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' ------------------------------ Captured log setup ------------------------------ 2025-04-04 18:11:06 [ 670 ] DEBUG : Command:[docker ps | wc -l] (cluster.py:122, run_and_check) 2025-04-04 18:11:06 [ 670 ] DEBUG : Stdout:1 (cluster.py:146, run_and_check) 2025-04-04 18:11:06 [ 670 ] DEBUG : No running containers (conftest.py:96, cleanup_environment) 2025-04-04 18:11:06 [ 670 ] DEBUG : Pruning Docker networks (conftest.py:98, cleanup_environment) 2025-04-04 18:11:06 [ 670 ] DEBUG : Command:[docker network prune --force] (cluster.py:122, run_and_check) 2025-04-04 18:11:06 [ 670 ] DEBUG : Command:[sysctl net.ipv4.ip_local_port_range='55000 65535'] (cluster.py:122, run_and_check) 2025-04-04 18:11:06 [ 670 ] DEBUG : Stdout:net.ipv4.ip_local_port_range = 55000 65535 (cluster.py:146, run_and_check) 2025-04-04 18:11:06 [ 670 ] DEBUG : ENV DOCKER_KERBEROS_KDC_TAG 9391ecdee8d7 (cluster.py:450, __init__) 2025-04-04 18:11:06 [ 670 ] DEBUG : ENV CLICKHOUSE_TESTS_SERVER_BIN_PATH /clickhouse (cluster.py:450, __init__) 2025-04-04 18:11:06 [ 670 ] DEBUG : ENV MSAN_OPTIONS abort_on_error=1 poison_in_dtor=1 (cluster.py:450, __init__) 2025-04-04 18:11:06 [ 670 ] DEBUG : ENV JAVA_TOOL_OPTIONS -Djdk.attach.allowAttachSelf=true (cluster.py:450, __init__) 2025-04-04 18:11:06 [ 670 ] DEBUG : ENV TSAN_OPTIONS halt_on_error=1 
abort_on_error=1 history_size=7 memory_limit_mb=46080 second_deadlock_stack=1 (cluster.py:450, __init__) 2025-04-04 18:11:06 [ 670 ] DEBUG : ENV HOSTNAME 5495df948c8e (cluster.py:450, __init__) 2025-04-04 18:11:06 [ 670 ] DEBUG : ENV SHLVL 0 (cluster.py:450, __init__) 2025-04-04 18:11:06 [ 670 ] DEBUG : ENV HOME /root (cluster.py:450, __init__) 2025-04-04 18:11:06 [ 670 ] DEBUG : ENV OLDPWD / (cluster.py:450, __init__) 2025-04-04 18:11:06 [ 670 ] DEBUG : ENV DOCKER_HELPER_TAG 5dc43a6382f0 (cluster.py:450, __init__) 2025-04-04 18:11:06 [ 670 ] DEBUG : ENV PYTHONUNBUFFERED 1 (cluster.py:450, __init__) 2025-04-04 18:11:06 [ 670 ] DEBUG : ENV DOCKER_PYTHON_BOTTLE_TAG caad4729259e (cluster.py:450, __init__) 2025-04-04 18:11:06 [ 670 ] DEBUG : ENV UBSAN_OPTIONS print_stacktrace=1 (cluster.py:450, __init__) 2025-04-04 18:11:06 [ 670 ] DEBUG : ENV PYTEST_ADDOPTS --dist=loadfile -n 10 -rfEps --run-id=0 --color=no --durations=0 'test_s3_zero_copy_replication/test.py::test_s3_zero_copy_with_ttl_move[tiered_copy-True-3]' test_server_keep_alive/test.py::test_max_keep_alive_requests_on_user_side test_ssh_keys_authentication/test.py::test_ecdsa test_ssh_keys_authentication/test.py::test_ed25519 test_ssh_keys_authentication/test.py::test_key_with_passphrase test_ssh_keys_authentication/test.py::test_key_with_wrong_passphrase test_ssh_keys_authentication/test.py::test_rsa test_ssh_keys_authentication/test.py::test_wrong_key test_storage_azure_blob_storage/test_check_after_upload.py::test_simple test_storage_azure_blob_storage/test_cluster.py::test_cluster_with_named_collection test_storage_azure_blob_storage/test_cluster.py::test_count test_storage_azure_blob_storage/test_cluster.py::test_format_detection test_storage_azure_blob_storage/test_cluster.py::test_partition_parallel_reading_with_cluster test_storage_azure_blob_storage/test_cluster.py::test_select_all test_storage_azure_blob_storage/test_cluster.py::test_skip_unavailable_shards test_storage_azure_blob_storage/test_cluster.py::test_union_all test_storage_azure_blob_storage/test_cluster.py::test_unset_skip_unavailable_shards test_storage_hudi/test.py::test_multiple_hudi_files test_storage_hudi/test.py::test_single_hudi_file test_storage_hudi/test.py::test_types 'test_storage_iceberg/test.py::test_cluster_table_function[azure-1]' 'test_storage_iceberg/test.py::test_cluster_table_function[azure-2]' 'test_storage_iceberg/test.py::test_cluster_table_function[hdfs-1]' 'test_storage_iceberg/test.py::test_cluster_table_function[hdfs-2]' 'test_storage_iceberg/test.py::test_cluster_table_function[s3-1]' 'test_storage_iceberg/test.py::test_cluster_table_function[s3-2]' 'test_storage_iceberg/test.py::test_delete_files[azure-1]' 'test_storage_iceberg/test.py::test_delete_files[azure-2]' 'test_storage_iceberg/test.py::test_delete_files[hdfs-1]' 'test_storage_iceberg/test.py::test_delete_files[hdfs-2]' 'test_storage_iceberg/test.py::test_delete_files[local-1]' 'test_storage_iceberg/test.py::test_delete_files[local-2]' 'test_storage_iceberg/test.py::test_delete_files[s3-1]' 'test_storage_iceberg/test.py::test_delete_files[s3-2]' 'test_storage_iceberg/test.py::test_evolved_schema_complex[azure-1]' 'test_storage_iceberg/test.py::test_evolved_schema_complex[azure-2]' 'test_storage_iceberg/test.py::test_evolved_schema_complex[local-1]' 'test_storage_iceberg/test.py::test_evolved_schema_complex[local-2]' 'test_storage_iceberg/test.py::test_evolved_schema_complex[s3-1]' 'test_storage_iceberg/test.py::test_evolved_schema_complex[s3-2]' 
'test_storage_iceberg/test.py::test_evolved_schema_simple[False-azure-1]' 'test_storage_iceberg/test.py::test_evolved_schema_simple[False-azure-2]' 'test_storage_iceberg/test.py::test_evolved_schema_simple[False-hdfs-1]' 'test_storage_iceberg/test.py::test_evolved_schema_simple[False-hdfs-2]' 'test_storage_iceberg/test.py::test_evolved_schema_simple[False-local-1]' 'test_storage_iceberg/test.py::test_evolved_schema_simple[False-local-2]' 'test_storage_iceberg/test.py::test_evolved_schema_simple[False-s3-1]' 'test_storage_iceberg/test.py::test_evolved_schema_simple[False-s3-2]' 'test_storage_iceberg/test.py::test_evolved_schema_simple[True-azure-1]' 'test_storage_iceberg/test.py::test_evolved_schema_simple[True-azure-2]' 'test_storage_iceberg/test.py::test_evolved_schema_simple[True-hdfs-1]' 'test_storage_iceberg/test.py::test_evolved_schema_simple[True-hdfs-2]' 'test_storage_iceberg/test.py::test_evolved_schema_simple[True-local-1]' 'test_storage_iceberg/test.py::test_evolved_schema_simple[True-local-2]' 'test_storage_iceberg/test.py::test_evolved_schema_simple[True-s3-1]' 'test_storage_iceberg/test.py::test_evolved_schema_simple[True-s3-2]' 'test_storage_iceberg/test.py::test_filesystem_cache[s3]' 'test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[azure-1]' 'test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[azure-2]' 'test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[hdfs-1]' 'test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[hdfs-2]' 'test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[local-1]' 'test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[local-2]' 'test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[s3-1]' 'test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[s3-2]' 'test_storage_iceberg/test.py::test_metadata_file_selection[azure-1]' 'test_storage_iceberg/test.py::test_metadata_file_selection[azure-2]' 'test_storage_iceberg/test.py::test_metadata_file_selection[hdfs-1]' 'test_storage_iceberg/test.py::test_metadata_file_selection[hdfs-2]' 'test_storage_iceberg/test.py::test_metadata_file_selection[local-1]' 'test_storage_iceberg/test.py::test_metadata_file_selection[local-2]' 'test_storage_iceberg/test.py::test_metadata_file_selection[s3-1]' 'test_storage_iceberg/test.py::test_metadata_file_selection[s3-2]' 'test_storage_iceberg/test.py::test_multiple_iceberg_files[azure-1]' 'test_storage_iceberg/test.py::test_multiple_iceberg_files[azure-2]' 'test_storage_iceberg/test.py::test_multiple_iceberg_files[hdfs-1]' 'test_storage_iceberg/test.py::test_multiple_iceberg_files[hdfs-2]' 'test_storage_iceberg/test.py::test_multiple_iceberg_files[local-1]' 'test_storage_iceberg/test.py::test_multiple_iceberg_files[local-2]' 'test_storage_iceberg/test.py::test_multiple_iceberg_files[s3-1]' 'test_storage_iceberg/test.py::test_multiple_iceberg_files[s3-2]' 'test_storage_iceberg/test.py::test_not_evolved_schema[azure-1]' 'test_storage_iceberg/test.py::test_not_evolved_schema[azure-2]' 'test_storage_iceberg/test.py::test_not_evolved_schema[hdfs-1]' 'test_storage_iceberg/test.py::test_not_evolved_schema[hdfs-2]' 'test_storage_iceberg/test.py::test_not_evolved_schema[local-1]' 'test_storage_iceberg/test.py::test_not_evolved_schema[local-2]' 'test_storage_iceberg/test.py::test_not_evolved_schema[s3-1]' 'test_storage_iceberg/test.py::test_not_evolved_schema[s3-2]' 'test_storage_iceberg/test.py::test_partition_by[azure-1]' 'test_storage_iceberg/test.py::test_partition_by[azure-2]' 
'test_storage_iceberg/test.py::test_partition_by[hdfs-1]' 'test_storage_iceberg/test.py::test_partition_by[hdfs-2]' 'test_storage_iceberg/test.py::test_partition_by[local-1]' 'test_storage_iceberg/test.py::test_partition_by[local-2]' 'test_storage_iceberg/test.py::test_partition_by[s3-1]' 'test_storage_iceberg/test.py::test_partition_by[s3-2]' test_storage_iceberg/test.py::test_restart_broken_s3 'test_storage_iceberg/test.py::test_row_based_deletes[azure]' 'test_storage_iceberg/test.py::test_row_based_deletes[hdfs]' -vvv (cluster.py:450, __init__) 2025-04-04 18:11:06 [ 670 ] DEBUG : ENV CLICKHOUSE_LIBRARY_BRIDGE_BINARY_PATH /clickhouse-library-bridge (cluster.py:450, __init__) 2025-04-04 18:11:06 [ 670 ] DEBUG : ENV COMPOSE_HTTP_TIMEOUT 600 (cluster.py:450, __init__) 2025-04-04 18:11:06 [ 670 ] DEBUG : ENV DOCKER_MYSQL_PHP_CLIENT_TAG 88be89c1e3b6 (cluster.py:450, __init__) 2025-04-04 18:11:06 [ 670 ] DEBUG : ENV DOCKER_DOTNET_CLIENT_TAG 11de0b29a15d (cluster.py:450, __init__) 2025-04-04 18:11:06 [ 670 ] DEBUG : ENV CLICKHOUSE_TESTS_CLIENT_BIN_PATH /clickhouse (cluster.py:450, __init__) 2025-04-04 18:11:06 [ 670 ] DEBUG : ENV DOCKER_MYSQL_JS_CLIENT_TAG 41ba7c2ec2a1 (cluster.py:450, __init__) 2025-04-04 18:11:06 [ 670 ] DEBUG : ENV PATH /spark-3.3.2-bin-hadoop3/bin:/opt/gdb/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin (cluster.py:450, __init__) 2025-04-04 18:11:06 [ 670 ] DEBUG : ENV DOCKER_KERBERIZED_HADOOP_TAG latest (cluster.py:450, __init__) 2025-04-04 18:11:06 [ 670 ] DEBUG : ENV DOCKER_CHANNEL stable (cluster.py:450, __init__) 2025-04-04 18:11:06 [ 670 ] DEBUG : ENV DOCKER_CLIENT_TIMEOUT 300 (cluster.py:450, __init__) 2025-04-04 18:11:06 [ 670 ] DEBUG : ENV DOCKER_POSTGRESQL_JAVA_CLIENT_TAG a4eff5c7f4d6 (cluster.py:450, __init__) 2025-04-04 18:11:06 [ 670 ] DEBUG : ENV DOCKER_NGINX_DAV_TAG b55ac9cd7519 (cluster.py:450, __init__) 2025-04-04 18:11:06 [ 670 ] DEBUG : ENV DOCKER_MYSQL_GOLANG_CLIENT_TAG 9bec2a638e6e (cluster.py:450, __init__) 2025-04-04 18:11:06 [ 670 ] DEBUG : ENV PWD /ClickHouse/tests/integration (cluster.py:450, __init__) 2025-04-04 18:11:06 [ 670 ] DEBUG : ENV DOCKER_MYSQL_JAVA_CLIENT_TAG 766bff31cfe4 (cluster.py:450, __init__) 2025-04-04 18:11:06 [ 670 ] DEBUG : ENV CLICKHOUSE_ODBC_BRIDGE_BINARY_PATH /clickhouse-odbc-bridge (cluster.py:450, __init__) 2025-04-04 18:11:06 [ 670 ] DEBUG : ENV CLICKHOUSE_TESTS_BASE_CONFIG_DIR /clickhouse-config (cluster.py:450, __init__) 2025-04-04 18:11:06 [ 670 ] DEBUG : ENV TZ Etc/UTC (cluster.py:450, __init__) 2025-04-04 18:11:06 [ 670 ] DEBUG : ENV JAVA_PATH /usr/lib/jvm/java-11-openjdk-amd64/bin/java (cluster.py:450, __init__) 2025-04-04 18:11:06 [ 670 ] DEBUG : ENV DOCKER_BASE_TAG 6712d5cc610d (cluster.py:450, __init__) 2025-04-04 18:11:06 [ 670 ] DEBUG : ENV SPARK_HOME /spark-3.3.2-bin-hadoop3 (cluster.py:450, __init__) 2025-04-04 18:11:06 [ 670 ] DEBUG : ENV LC_CTYPE C.UTF-8 (cluster.py:450, __init__) 2025-04-04 18:11:06 [ 670 ] DEBUG : ENV INTEGRATION_TESTS_RUN_ID 0 (cluster.py:450, __init__) 2025-04-04 18:11:06 [ 670 ] DEBUG : ENV WORKER_FREE_PORTS 30000 30001 30002 30003 30004 30005 30006 30007 30008 30009 30010 30011 30012 30013 30014 30015 30016 30017 30018 30019 30020 30021 30022 30023 30024 30025 30026 30027 30028 30029 30030 30031 30032 30033 30034 30035 30036 30037 30038 30039 30040 30041 30042 30043 30044 30045 30046 30047 30048 30049 (cluster.py:450, __init__) 2025-04-04 18:11:06 [ 670 ] DEBUG : ENV PYTEST_XDIST_TESTRUNUID 269aa778434d4c4cac33147c10e9e07e (cluster.py:450, __init__) 2025-04-04 
18:11:06 [ 670 ] DEBUG : ENV PYTEST_XDIST_WORKER gw0 (cluster.py:450, __init__) 2025-04-04 18:11:06 [ 670 ] DEBUG : ENV PYTEST_XDIST_WORKER_COUNT 10 (cluster.py:450, __init__) 2025-04-04 18:11:06 [ 670 ] DEBUG : ENV PYTEST_CURRENT_TEST test_storage_iceberg/test.py::test_cluster_table_function[azure-1] (setup) (cluster.py:450, __init__) 2025-04-04 18:11:06 [ 670 ] DEBUG : CLUSTER INIT base_config_dir:/clickhouse-config (cluster.py:774, __init__) 2025-04-04 18:11:08 [ 670 ] DEBUG : GatewayClient.address is deprecated and will be removed in version 1.0. Use GatewayParameters instead. (java_gateway.py:163, deprecated) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: A 72f9fe8748d825832ed45c4c0694a4e33b9ab7881f74fda6fd8ba7daafb730cb (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: j i rj org.apache.spark.SparkConf e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: j i rj org.apache.spark.api.java.* e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: j i rj org.apache.spark.api.python.* e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: j i rj org.apache.spark.ml.python.* e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: j i rj org.apache.spark.mllib.api.python.* e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: j i rj org.apache.spark.resource.* e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: j i rj org.apache.spark.sql.* e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: j i rj org.apache.spark.sql.api.python.* e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: j i rj org.apache.spark.sql.hive.* e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: j i rj scala.Tuple2 e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: r u SparkConf rj e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.SparkConf (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: i org.apache.spark.SparkConf bTrue e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !yro0 (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to 
send: c o0 set sspark.app.name sspark_test e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !yro1 (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: c o0 set sspark.master slocal e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !yro2 (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: c o0 contains sspark.serializer.objectStreamReset e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !ybfalse (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: c o0 set sspark.serializer.objectStreamReset s100 e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !yro3 (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: c o0 contains sspark.rdd.compress e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !ybfalse (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: c o0 set sspark.rdd.compress sTrue e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !yro4 (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: c o0 contains sspark.master e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: c o0 contains sspark.app.name e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: c o0 contains sspark.master e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: c o0 get sspark.master e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !yslocal (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: c o0 contains sspark.app.name e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: c o0 get sspark.app.name e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !ysspark_test (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: c o0 contains sspark.home e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !ybfalse (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: c o0 getAll e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !yto5 (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: a e o5 e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !yi8 (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: a g o5 i0 e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !yro6 (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: c o6 _1 e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer 
received: !ysspark.master (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: c o6 _2 e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !yslocal (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: a e o5 e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !yi8 (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: a g o5 i1 e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !yro7 (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: c o7 _1 e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !ysspark.app.name (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: c o7 _2 e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !ysspark_test (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: a e o5 e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !yi8 (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: a g o5 i2 e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !yro8 (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: c o8 _1 e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !ysspark.rdd.compress (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: c o8 _2 e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !ysTrue (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: a e o5 e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !yi8 (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: a g o5 i3 e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !yro9 (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: c o9 _1 e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !ysspark.serializer.objectStreamReset (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: c o9 _2 e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !ys100 (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: a e o5 e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !yi8 (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: a g o5 i4 e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !yro10 (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: c o10 _1 e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !ysspark.submit.pyFiles (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: c o10 _2 e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !ys (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: a e o5 e (clientserver.py:501, send_command) 
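Every "Command to send" / "Answer received" pair in these DEBUG records is the Py4J wire protocol between the test process and the Spark JVM. Read loosely (my decoding of the py4j protocol, hedged): "r u <name>" resolves a name, "i <class> ..." constructs an instance, "c <object> <method> ..." invokes a method, "a e"/"a g" are array length/element access, "m d <object>" releases a JVM reference, and the "!y..." replies carry a typed success value (ro = object reference, s = string, b = boolean, i = integer, c = class, t = array, v = void). The traffic above is nothing more than PySpark building a SparkConf; a minimal sketch of the Python side that produces it (pyspark assumed installed, key/value names taken from the log):

    from pyspark.conf import SparkConf

    # SparkConf(True) crosses the bridge as "i org.apache.spark.SparkConf bTrue e"
    # (answered "!yro0"), and each set() as "c o0 set s<key> s<value> e".
    conf = SparkConf(loadDefaults=True)
    conf.set("spark.app.name", "spark_test")
    conf.set("spark.master", "local")

    # getAll() fetches a Tuple2[] ("c o0 getAll e" -> "!yto5") and then walks it
    # with alternating length ("a e o5 e") and element ("a g o5 iN e") commands.
    for key, value in conf.getAll():
        print(key, value)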
2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !yi8 (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: a g o5 i5 e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !yro11 (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: c o11 _1 e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !ysspark.app.submitTime (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: c o11 _2 e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !ys1743790268520 (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: a e o5 e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !yi8 (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: a g o5 i6 e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !yro12 (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: c o12 _1 e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !ysspark.submit.deployMode (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: c o12 _2 e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !ysclient (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: a e o5 e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !yi8 (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: a g o5 i7 e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !yro13 (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: c o13 _1 e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !ysspark.ui.showConsoleProgress (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: c o13 _2 e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !ystrue (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: a e o5 e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !yi8 (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: r u JavaSparkContext rj e (clientserver.py:501, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.java.JavaSparkContext (clientserver.py:512, send_command) 2025-04-04 18:11:08 [ 670 ] DEBUG : Command to send: i org.apache.spark.api.java.JavaSparkContext ro0 e (clientserver.py:501, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Command to send: A 72f9fe8748d825832ed45c4c0694a4e33b9ab7881f74fda6fd8ba7daafb730cb (clientserver.py:501, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Command to send: m d o1 e (clientserver.py:501, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Command to send: m d o2 e (clientserver.py:501, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:11:09 
[ 670 ] DEBUG : Command to send: m d o3 e (clientserver.py:501, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Command to send: m d o4 e (clientserver.py:501, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Command to send: m d o5 e (clientserver.py:501, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Answer received: !yro14 (clientserver.py:512, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Command to send: c o14 sc e (clientserver.py:501, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Answer received: !yro15 (clientserver.py:512, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Command to send: c o15 conf e (clientserver.py:501, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Answer received: !yro16 (clientserver.py:512, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Command to send: r u PythonAccumulatorV2 rj e (clientserver.py:501, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonAccumulatorV2 (clientserver.py:512, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Command to send: i org.apache.spark.api.python.PythonAccumulatorV2 s127.0.0.1 i57765 s72f9fe8748d825832ed45c4c0694a4e33b9ab7881f74fda6fd8ba7daafb730cb e (clientserver.py:501, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Answer received: !yro17 (clientserver.py:512, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Command to send: c o14 sc e (clientserver.py:501, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Answer received: !yro18 (clientserver.py:512, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Command to send: c o18 register ro17 e (clientserver.py:501, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils isEncryptionEnabled e (clientserver.py:501, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils isEncryptionEnabled ro14 e (clientserver.py:501, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Answer received: !ybfalse (clientserver.py:512, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils getPythonAuthSocketTimeout e (clientserver.py:501, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils getPythonAuthSocketTimeout ro14 e (clientserver.py:501, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Answer received: !yL15 (clientserver.py:512, send_command) 2025-04-04 18:11:09 [ 670 
] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils getSparkBufferSize e (clientserver.py:501, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils getSparkBufferSize ro14 e (clientserver.py:501, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Answer received: !yi65536 (clientserver.py:512, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Command to send: r u org rj e (clientserver.py:501, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Command to send: r u org.apache rj e (clientserver.py:501, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Command to send: r u org.apache.spark rj e (clientserver.py:501, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Command to send: r u org.apache.spark.SparkFiles rj e (clientserver.py:501, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.SparkFiles (clientserver.py:512, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.SparkFiles getRootDirectory e (clientserver.py:501, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.SparkFiles getRootDirectory e (clientserver.py:501, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Answer received: !ys/tmp/spark-c232781e-7e38-46f8-81c0-e6e5de6d7676/userFiles-a3d2ae26-afd0-4044-98de-4e80c51814e2 (clientserver.py:512, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Command to send: c o16 get sspark.submit.pyFiles s e (clientserver.py:501, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Answer received: !ys (clientserver.py:512, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Command to send: r u org rj e (clientserver.py:501, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Command to send: r u org.apache rj e (clientserver.py:501, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Command to send: r u org.apache.spark rj e (clientserver.py:501, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Command to send: r u org.apache.spark.util rj e (clientserver.py:501, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Command to send: r u org.apache.spark.util.Utils rj e (clientserver.py:501, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.util.Utils (clientserver.py:512, send_command) 2025-04-04 18:11:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.util.Utils getLocalDir e (clientserver.py:501, send_command) 
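From "r u JavaSparkContext" onward the captured log is SparkContext startup: the JVM-side JavaSparkContext is built from the conf ("i org.apache.spark.api.java.JavaSparkContext ro0 e"), a PythonAccumulatorV2 server is registered, PythonUtils is queried for encryption, auth-socket timeout and buffer size, and SparkFiles/Utils resolve the scratch directories under /tmp/spark-.... A minimal sketch of the public PySpark calls that drive this sequence (pyspark assumed installed):

    from pyspark import SparkConf, SparkContext, SparkFiles

    # SparkContext.__init__ performs the logged handshake: JavaSparkContext
    # construction, accumulator registration, and the PythonUtils queries.
    sc = SparkContext(conf=SparkConf().setAppName("spark_test").setMaster("local"))

    print(SparkFiles.getRootDirectory())       # "c z:org.apache.spark.SparkFiles getRootDirectory e"
    print(sc.getConf().get("spark.app.name"))  # public wrapper around the sc()/conf() object chain above
    sc.stop()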
2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: c o14 sc e (clientserver.py:501, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !yro19 (clientserver.py:512, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: c o19 conf e (clientserver.py:501, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !yro20 (clientserver.py:512, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.util.Utils getLocalDir ro20 e (clientserver.py:501, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !ys/tmp/spark-c232781e-7e38-46f8-81c0-e6e5de6d7676 (clientserver.py:512, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: r u org rj e (clientserver.py:501, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: r u org.apache rj e (clientserver.py:501, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: r u org.apache.spark rj e (clientserver.py:501, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: r u org.apache.spark.util rj e (clientserver.py:501, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: r u org.apache.spark.util.Utils rj e (clientserver.py:501, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.util.Utils (clientserver.py:512, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: r m org.apache.spark.util.Utils createTempDir e (clientserver.py:501, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.util.Utils createTempDir s/tmp/spark-c232781e-7e38-46f8-81c0-e6e5de6d7676 spyspark e (clientserver.py:501, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !yro21 (clientserver.py:512, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: c o21 getAbsolutePath e (clientserver.py:501, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !ys/tmp/spark-c232781e-7e38-46f8-81c0-e6e5de6d7676/pyspark-1321940d-cd49-407c-8d06-c6d73562e312 (clientserver.py:512, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: c o16 get sspark.python.profile sfalse e (clientserver.py:501, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !ysfalse (clientserver.py:512, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: r u SparkSession rj e (clientserver.py:501, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession (clientserver.py:512, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession getDefaultSession e (clientserver.py:501, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.SparkSession getDefaultSession e (clientserver.py:501, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: 
!yro22 (clientserver.py:512, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: c o22 isDefined e (clientserver.py:501, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !ybfalse (clientserver.py:512, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: r u SparkSession rj e (clientserver.py:501, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession (clientserver.py:512, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: c o14 sc e (clientserver.py:501, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !yro23 (clientserver.py:512, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: i java.util.HashMap e (clientserver.py:501, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !yao24 (clientserver.py:512, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: c o24 put sspark.app.name sspark_test e (clientserver.py:501, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !yn (clientserver.py:512, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: c o24 put sspark.master slocal e (clientserver.py:501, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !yn (clientserver.py:512, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: i org.apache.spark.sql.SparkSession ro23 ro24 e (clientserver.py:501, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !yro25 (clientserver.py:512, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: r u SparkSession rj e (clientserver.py:501, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession (clientserver.py:512, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession setDefaultSession e (clientserver.py:501, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.SparkSession setDefaultSession ro25 e (clientserver.py:501, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: r u SparkSession rj e (clientserver.py:501, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession (clientserver.py:512, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession setActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.SparkSession setActiveSession ro25 e (clientserver.py:501, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: c o14 stop e (clientserver.py:501, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: r u SparkSession rj e (clientserver.py:501, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession (clientserver.py:512, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession clearDefaultSession e 
(clientserver.py:501, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.SparkSession clearDefaultSession e (clientserver.py:501, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: r u SparkSession rj e (clientserver.py:501, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession (clientserver.py:512, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession clearActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.SparkSession clearActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:11:10 [ 670 ] DEBUG : clickhouse_start_command: clickhouse server --config-file=/etc/clickhouse-server/{main_config_file} --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log (cluster.py:1729, add_instance) 2025-04-04 18:11:10 [ 670 ] DEBUG : HDFS BASE CMD:{self.base_hdfs_cmd)} (cluster.py:1269, setup_hdfs_cmd) 2025-04-04 18:11:10 [ 670 ] DEBUG : Cluster name: project_name:rootteststorageiceberg-gw0. Added instance name:node1 tag:6712d5cc610d base_cmd:['docker', 'compose', '--env-file', '/ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/.env', '--project-name', 'rootteststorageiceberg-gw0', '--file', '/ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node1/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_hdfs.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_azurite.yml'] docker_compose_yml_dir:/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/ (cluster.py:2025, add_instance) 2025-04-04 18:11:10 [ 670 ] DEBUG : clickhouse_start_command: clickhouse server --config-file=/etc/clickhouse-server/{main_config_file} --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log (cluster.py:1729, add_instance) 2025-04-04 18:11:10 [ 670 ] DEBUG : Cluster name: project_name:rootteststorageiceberg-gw0. 
Added instance name:node2 tag:6712d5cc610d base_cmd:['docker', 'compose', '--env-file', '/ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/.env', '--project-name', 'rootteststorageiceberg-gw0', '--file', '/ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node1/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_hdfs.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_azurite.yml', '--file', '/ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node2/docker-compose.yml'] docker_compose_yml_dir:/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/ (cluster.py:2025, add_instance) 2025-04-04 18:11:10 [ 670 ] DEBUG : clickhouse_start_command: clickhouse server --config-file=/etc/clickhouse-server/{main_config_file} --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log (cluster.py:1729, add_instance) 2025-04-04 18:11:10 [ 670 ] DEBUG : Cluster name: project_name:rootteststorageiceberg-gw0. Added instance name:node3 tag:6712d5cc610d base_cmd:['docker', 'compose', '--env-file', '/ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/.env', '--project-name', 'rootteststorageiceberg-gw0', '--file', '/ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node1/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_hdfs.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_azurite.yml', '--file', '/ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node2/docker-compose.yml', '--file', '/ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node3/docker-compose.yml'] docker_compose_yml_dir:/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/ (cluster.py:2025, add_instance) 2025-04-04 18:11:10 [ 670 ] INFO : Starting cluster... (test.py:110, started_cluster) 2025-04-04 18:11:10 [ 670 ] INFO : Running tests in /ClickHouse/tests/integration/test_storage_iceberg/test.py (cluster.py:2793, start) 2025-04-04 18:11:10 [ 670 ] DEBUG : Cluster start called. 
2025-04-04 18:11:10 [ 670 ] DEBUG : Docker networks for project rootteststorageiceberg-gw0 are NETWORK ID NAME DRIVER SCOPE (cluster.py:873, print_all_docker_pieces)
2025-04-04 18:11:10 [ 670 ] DEBUG : Docker containers for project rootteststorageiceberg-gw0 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:881, print_all_docker_pieces)
2025-04-04 18:11:10 [ 670 ] DEBUG : Docker volumes for project rootteststorageiceberg-gw0 are DRIVER VOLUME NAME (cluster.py:889, print_all_docker_pieces)
2025-04-04 18:11:10 [ 670 ] DEBUG : Cleanup called (cluster.py:894, cleanup)
2025-04-04 18:11:10 [ 670 ] DEBUG : Docker networks for project rootteststorageiceberg-gw0 are NETWORK ID NAME DRIVER SCOPE (cluster.py:873, print_all_docker_pieces)
2025-04-04 18:11:10 [ 670 ] DEBUG : Docker containers for project rootteststorageiceberg-gw0 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:881, print_all_docker_pieces)
2025-04-04 18:11:10 [ 670 ] DEBUG : Docker volumes for project rootteststorageiceberg-gw0 are DRIVER VOLUME NAME (cluster.py:889, print_all_docker_pieces)
2025-04-04 18:11:10 [ 670 ] DEBUG : Command:[docker container list --all --filter name='^/rootteststorageiceberg-gw0-.*-1$' --format '{{.ID}}:{{.Names}}'] (cluster.py:122, run_and_check)
2025-04-04 18:11:10 [ 670 ] DEBUG : Unstopped containers: {} (cluster.py:908, cleanup)
2025-04-04 18:11:10 [ 670 ] DEBUG : No running containers for project: rootteststorageiceberg-gw0 (cluster.py:922, cleanup)
2025-04-04 18:11:10 [ 670 ] DEBUG : Trying to prune unused networks... (cluster.py:928, cleanup)
2025-04-04 18:11:10 [ 670 ] DEBUG : Trying to prune unused images... (cluster.py:944, cleanup)
2025-04-04 18:11:10 [ 670 ] DEBUG : Command:[docker image prune -f] (cluster.py:122, run_and_check)
2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: m d o0 e (clientserver.py:501, send_command)
2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: m d o6 e (clientserver.py:501, send_command)
2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: m d o7 e (clientserver.py:501, send_command)
2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: m d o8 e (clientserver.py:501, send_command)
2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: m d o9 e (clientserver.py:501, send_command)
2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: m d o10 e (clientserver.py:501, send_command)
2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: m d o11 e (clientserver.py:501, send_command)
2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: m d o12 e (clientserver.py:501, send_command)
2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: m d o13 e (clientserver.py:501, send_command)
2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
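The cleanup pass above shells out to the docker CLI through `run_and_check`. A rough sketch of the same sequence (function name and structure are hypothetical; only the commands come from the log):

```python
import subprocess

def docker_cleanup(project: str) -> None:
    # List leftover compose containers for this project, as in the logged
    # `docker container list --all --filter name=... --format ...` call.
    out = subprocess.run(
        ["docker", "container", "list", "--all",
         "--filter", f"name=^/{project}-.*-1$",
         "--format", "{{.ID}}:{{.Names}}"],
        capture_output=True, text=True, check=True,
    ).stdout
    unstopped = dict(line.split(":", 1) for line in out.splitlines() if line)
    print(f"Unstopped containers: {unstopped}")
    # Prune unused images; check=False because a concurrent test worker may
    # already hold the prune lock ("a prune operation is already running",
    # Exitcode:1 in the log below), which the cleanup evidently tolerates.
    subprocess.run(["docker", "image", "prune", "-f"], check=False)
```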
2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: m d o15 e (clientserver.py:501, send_command)
2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: m d o18 e (clientserver.py:501, send_command)
2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: m d o19 e (clientserver.py:501, send_command)
2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: m d o20 e (clientserver.py:501, send_command)
2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:11:10 [ 670 ] DEBUG : Command to send: m d o24 e (clientserver.py:501, send_command)
2025-04-04 18:11:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:11:10 [ 670 ] DEBUG : Stderr:Error response from daemon: a prune operation is already running (cluster.py:148, run_and_check)
2025-04-04 18:11:10 [ 670 ] DEBUG : Exitcode:1 (cluster.py:150, run_and_check)
2025-04-04 18:11:10 [ 670 ] DEBUG : Trying to prune unused volumes... (cluster.py:953, cleanup)
2025-04-04 18:11:10 [ 670 ] DEBUG : Command:[docker volume ls | wc -l] (cluster.py:122, run_and_check)
2025-04-04 18:11:10 [ 670 ] DEBUG : Stdout:1 (cluster.py:146, run_and_check)
2025-04-04 18:11:10 [ 670 ] DEBUG : Volumes pruned: 1 (cluster.py:958, cleanup)
2025-04-04 18:11:10 [ 670 ] DEBUG : Setup directory for instance: node1 (cluster.py:2813, start)
2025-04-04 18:11:10 [ 670 ] DEBUG : Create directory for configuration generated in this helper (cluster.py:4639, create_dir)
2025-04-04 18:11:10 [ 670 ] DEBUG : Create directory for common tests configuration (cluster.py:4644, create_dir)
2025-04-04 18:11:10 [ 670 ] DEBUG : Copy common configuration from helpers (cluster.py:4664, create_dir)
2025-04-04 18:11:10 [ 670 ] DEBUG : Generate and write macros file (cluster.py:4716, create_dir)
2025-04-04 18:11:10 [ 670 ] DEBUG : Copy custom test config files ['/ClickHouse/tests/integration/test_storage_iceberg/configs/config.d/query_log.xml', '/ClickHouse/tests/integration/test_storage_iceberg/configs/config.d/cluster.xml', '/ClickHouse/tests/integration/test_storage_iceberg/configs/config.d/named_collections.xml', '/ClickHouse/tests/integration/test_storage_iceberg/configs/config.d/filesystem_caches.xml'] to /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node1/configs/config.d (cluster.py:4752, create_dir)
2025-04-04 18:11:10 [ 670 ] DEBUG : Setup database dir /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node1/database (cluster.py:4769, create_dir)
2025-04-04 18:11:10 [ 670 ] DEBUG : Setup logs dir /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node1/logs (cluster.py:4780, create_dir)
2025-04-04 18:11:10 [ 670 ] DEBUG : Entrypoint cmd: bash -c "trap 'pkill tail' INT TERM; clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon -- ; coproc tail -f /dev/null; wait $$!" (cluster.py:4864, create_dir)
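The `create_dir` steps above lay out one directory tree per instance: generated server config, shared helper configs, a macros file, the test's own `config.d` overrides, plus empty `database` and `logs` dirs. A simplified sketch of the config-copy step (function name and signature are hypothetical; the file list and destination come from the log):

```python
import shutil
from pathlib import Path

def copy_custom_configs(custom_files: list[str], instance_dir: Path) -> None:
    # Mirrors "Copy custom test config files [...] to <instance>/configs/config.d":
    # each XML fragment becomes a config.d override next to the generated config.
    target = instance_dir / "configs" / "config.d"
    target.mkdir(parents=True, exist_ok=True)
    for path in custom_files:
        shutil.copy(path, target)

copy_custom_configs(
    ["configs/config.d/query_log.xml", "configs/config.d/cluster.xml"],
    Path("_instances-0-gw0/node1"),
)
```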
2025-04-04 18:11:10 [ 670 ] DEBUG : Setup directory for instance: node2 (cluster.py:2813, start)
2025-04-04 18:11:10 [ 670 ] DEBUG : Create directory for configuration generated in this helper (cluster.py:4639, create_dir)
2025-04-04 18:11:10 [ 670 ] DEBUG : Create directory for common tests configuration (cluster.py:4644, create_dir)
2025-04-04 18:11:10 [ 670 ] DEBUG : Copy common configuration from helpers (cluster.py:4664, create_dir)
2025-04-04 18:11:10 [ 670 ] DEBUG : Generate and write macros file (cluster.py:4716, create_dir)
2025-04-04 18:11:10 [ 670 ] DEBUG : Copy custom test config files ['/ClickHouse/tests/integration/test_storage_iceberg/configs/config.d/query_log.xml', '/ClickHouse/tests/integration/test_storage_iceberg/configs/config.d/cluster.xml', '/ClickHouse/tests/integration/test_storage_iceberg/configs/config.d/named_collections.xml', '/ClickHouse/tests/integration/test_storage_iceberg/configs/config.d/filesystem_caches.xml'] to /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node2/configs/config.d (cluster.py:4752, create_dir)
2025-04-04 18:11:10 [ 670 ] DEBUG : Setup database dir /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node2/database (cluster.py:4769, create_dir)
2025-04-04 18:11:10 [ 670 ] DEBUG : Setup logs dir /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node2/logs (cluster.py:4780, create_dir)
2025-04-04 18:11:10 [ 670 ] DEBUG : Entrypoint cmd: bash -c "trap 'pkill tail' INT TERM; clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon -- ; coproc tail -f /dev/null; wait $$!" (cluster.py:4864, create_dir)
2025-04-04 18:11:10 [ 670 ] DEBUG : Setup directory for instance: node3 (cluster.py:2813, start)
2025-04-04 18:11:10 [ 670 ] DEBUG : Create directory for configuration generated in this helper (cluster.py:4639, create_dir)
2025-04-04 18:11:10 [ 670 ] DEBUG : Create directory for common tests configuration (cluster.py:4644, create_dir)
2025-04-04 18:11:10 [ 670 ] DEBUG : Copy common configuration from helpers (cluster.py:4664, create_dir)
2025-04-04 18:11:10 [ 670 ] DEBUG : Generate and write macros file (cluster.py:4716, create_dir)
2025-04-04 18:11:10 [ 670 ] DEBUG : Copy custom test config files ['/ClickHouse/tests/integration/test_storage_iceberg/configs/config.d/query_log.xml', '/ClickHouse/tests/integration/test_storage_iceberg/configs/config.d/cluster.xml', '/ClickHouse/tests/integration/test_storage_iceberg/configs/config.d/named_collections.xml', '/ClickHouse/tests/integration/test_storage_iceberg/configs/config.d/filesystem_caches.xml'] to /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node3/configs/config.d (cluster.py:4752, create_dir)
2025-04-04 18:11:10 [ 670 ] DEBUG : Setup database dir /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node3/database (cluster.py:4769, create_dir)
2025-04-04 18:11:10 [ 670 ] DEBUG : Setup logs dir /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node3/logs (cluster.py:4780, create_dir)
2025-04-04 18:11:10 [ 670 ] DEBUG : Entrypoint cmd: bash -c "trap 'pkill tail' INT TERM; clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon -- ; coproc tail -f /dev/null; wait $$!" (cluster.py:4864, create_dir)
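All three nodes share the same entrypoint one-liner: it daemonizes `clickhouse server`, then keeps the container's PID 1 busy with a `coproc tail -f /dev/null` that the `trap` kills on INT/TERM so `docker stop` exits cleanly. The doubled `$$` is presumably docker-compose escaping, so the shell that finally runs would see `wait $!` on the backgrounded tail. A sketch of how such a string could be assembled (helper name is hypothetical; the escaping interpretation is an assumption):

```python
def build_entrypoint(start_cmd: str) -> str:
    # Assumption: this lands in a docker-compose file, where "$$" un-escapes
    # to "$", so the container shell ultimately executes `wait $!`.
    return (
        'bash -c "'
        "trap 'pkill tail' INT TERM; "  # on stop, kill the keep-alive tail
        f"{start_cmd} --daemon -- ; "   # the server forks into the background
        "coproc tail -f /dev/null; "    # long-lived no-op keeps the shell alive
        'wait $$!"'
    )
```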
2025-04-04 18:11:10 [ 670 ] DEBUG : Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw', 'HDFS_HOST': 'hdfs1', 'HDFS_NAME_PORT': '50070', 'HDFS_DATA_PORT': '50075', 'HDFS_LOGS': '/ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/hdfs/logs', 'HDFS_FS': 'bind', 'MINIO_CERTS_DIR': '/ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/minio/certs', 'MINIO_DATA_DIR': '/ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/minio/data', 'MINIO_PORT': '9001', 'SSL_CERT_FILE': '/ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/minio/certs/public.crt', 'RESOLVER_LOGS': '/ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/resolver', 'RESOLVER_LOGS_FS': 'bind', 'AZURITE_PORT': '30000', 'AZURITE_STORAGE_ACCOUNT_URL': 'http://azurite1:30000/devstoreaccount1', 'AZURITE_CONNECTION_STRING': 'DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://azurite1:30000/devstoreaccount1;'} stored in /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/.env (cluster.py:97, _create_env_file)
2025-04-04 18:11:10 [ 670 ] DEBUG : Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] (config.py:21, find_config_file)
2025-04-04 18:11:10 [ 670 ] DEBUG : No config file found (config.py:28, find_config_file)
2025-04-04 18:11:10 [ 670 ] DEBUG : Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] (config.py:21, find_config_file)
2025-04-04 18:11:10 [ 670 ] DEBUG : No config file found (config.py:28, find_config_file)
2025-04-04 18:11:10 [ 670 ] DEBUG : http://localhost:None "GET /version HTTP/1.1" 200 826 (connectionpool.py:547, _make_request)
2025-04-04 18:11:10 [ 670 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/.env --project-name rootteststorageiceberg-gw0 --file /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_hdfs.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_azurite.yml --file /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node2/docker-compose.yml --file /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node3/docker-compose.yml pull] (cluster.py:122, run_and_check)
2025-04-04 18:11:11 [ 670 ] DEBUG : Command to send: m d o21 e (clientserver.py:501, send_command)
2025-04-04 18:11:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:11:11 [ 670 ] DEBUG : Command to send: m d o22 e (clientserver.py:501, send_command)
2025-04-04 18:11:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:11:11 [ 670 ] DEBUG : Command to send: m d o23 e (clientserver.py:501, send_command)
2025-04-04 18:11:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:13:53 [ 670 ] DEBUG : Stderr: node1 Skipped - Image is already being pulled by node2 (cluster.py:148, run_and_check)
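The Env dict above is persisted as the compose project's `.env` file (the same file every `--env-file` flag in this log points at), which is how per-run values like `MINIO_PORT` or `AZURITE_CONNECTION_STRING` reach the compose YAML. A minimal sketch of that write, assuming plain KEY=VALUE lines (helper name borrowed from the logged `_create_env_file`; the real implementation is not shown here):

```python
from pathlib import Path

def create_env_file(path: Path, variables: dict[str, str]) -> Path:
    # docker compose reads this file via --env-file and substitutes ${VAR}
    # references inside the compose files.
    with open(path, "w") as f:
        for name, value in variables.items():
            f.write(f"{name}={value}\n")
    return path

create_env_file(Path(".env"), {"MINIO_PORT": "9001", "AZURITE_PORT": "30000"})
```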
2025-04-04 18:13:53 [ 670 ] DEBUG : Stderr: node3 Skipped - Image is already being pulled by node2 (cluster.py:148, run_and_check)
2025-04-04 18:13:53 [ 670 ] DEBUG : Stderr: proxy2 Skipped - Image is already being pulled by proxy1 (cluster.py:148, run_and_check)
2025-04-04 18:13:53 [ 670 ] DEBUG : Stderr: azurite1 Pulling (cluster.py:148, run_and_check)
2025-04-04 18:13:53 [ 670 ] DEBUG : Stderr: hdfs1 Pulling (cluster.py:148, run_and_check)
2025-04-04 18:13:53 [ 670 ] DEBUG : Stderr: minio1 Pulling (cluster.py:148, run_and_check)
2025-04-04 18:13:53 [ 670 ] DEBUG : Stderr: node2 Pulling (cluster.py:148, run_and_check)
2025-04-04 18:13:53 [ 670 ] DEBUG : Stderr: resolver Pulling (cluster.py:148, run_and_check)
2025-04-04 18:13:53 [ 670 ] DEBUG : Stderr: proxy1 Pulling (cluster.py:148, run_and_check)
2025-04-04 18:13:53 [ 670 ] DEBUG : Stderr: minio1 Pulled (cluster.py:148, run_and_check)
2025-04-04 18:13:53 [ 670 ] DEBUG : Stderr: node2 Pulled (cluster.py:148, run_and_check)
2025-04-04 18:13:53 [ 670 ] DEBUG : Stderr: hdfs1 Pulled (cluster.py:148, run_and_check)
2025-04-04 18:13:53 [ 670 ] DEBUG : Stderr: proxy1 Pulled (cluster.py:148, run_and_check)
2025-04-04 18:13:53 [ 670 ] DEBUG : Stderr: resolver Pulled (cluster.py:148, run_and_check)
[layer-by-layer Downloading / Verifying Checksum / Extracting progress for layers f18232174bc9, cb2bde55f71f, 9d0e0719fbe0, 6f063dbd7a5d, f9e3e3d8f042, a39ef2f62dc8, 9a21c6b23f0e, efeb7b313b67, 6fef65209747 and 3d377e512a83 elided; every layer ends with "Pull complete"]
2025-04-04 18:13:53 [ 670 ] DEBUG : Stderr: azurite1 Pulled (cluster.py:148, run_and_check)
2025-04-04 18:13:53 [ 670 ] DEBUG : Setup HDFS (cluster.py:3064, start)
2025-04-04 18:13:53 [ 670 ] DEBUG : Command:[docker compose --project-name rootteststorageiceberg-gw0 --env-file /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_hdfs.yml --verbose up -d] (cluster.py:122, run_and_check)
2025-04-04 18:13:53 [ 670 ] DEBUG : Stderr:time="2025-04-04T18:13:53Z" level=trace msg="Docker Desktop integration not enabled" (cluster.py:148, run_and_check)
2025-04-04 18:13:53 [ 670 ] DEBUG : Stderr: Network rootteststorageiceberg-gw0_default Creating (cluster.py:148, run_and_check)
2025-04-04 18:13:53 [ 670 ] DEBUG : Stderr: Network rootteststorageiceberg-gw0_default Created (cluster.py:148, run_and_check)
2025-04-04 18:13:53 [ 670 ] DEBUG : Stderr: Container rootteststorageiceberg-gw0-hdfs1-1 Creating (cluster.py:148, run_and_check)
2025-04-04 18:13:53 [ 670 ] DEBUG : Stderr: Container rootteststorageiceberg-gw0-hdfs1-1 Created (cluster.py:148, run_and_check)
2025-04-04 18:13:53 [ 670 ] DEBUG : Stderr: Container rootteststorageiceberg-gw0-hdfs1-1 Starting (cluster.py:148, run_and_check)
2025-04-04 18:13:53 [ 670 ] DEBUG : Stderr: Container rootteststorageiceberg-gw0-hdfs1-1 Started (cluster.py:148, run_and_check)
2025-04-04 18:13:53 [ 670 ] DEBUG : Stderr:time="2025-04-04T18:13:53Z" level=debug msg="otel error" error="" (cluster.py:148, run_and_check)
2025-04-04 18:13:53 [ 670 ] DEBUG : Stderr:time="2025-04-04T18:13:53Z" level=debug msg="otel error" error="" (cluster.py:148, run_and_check)
2025-04-04 18:13:53 [ 670 ] DEBUG : get_instance_ip instance_name=hdfs1 (cluster.py:2082, get_instance_ip)
2025-04-04 18:13:53 [ 670 ] DEBUG : http://localhost:None "GET /v1.46/containers/rootteststorageiceberg-gw0-hdfs1-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2025-04-04 18:13:53 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /somefilewithrandomname222 user:root, principal:None (hdfs_api.py:194, write_data)
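`get_instance_ip` resolves the freshly started compose container's address before the HDFS probe; the `GET /v1.46/containers/rootteststorageiceberg-gw0-hdfs1-1/json` line shows it inspecting the container over the Docker API. A sketch with the `docker` SDK for Python (assumption: the IP is read from the container's network settings, which would match the 172.16.2.2 used in the probe calls below):

```python
import docker

def get_instance_ip(project_name: str, instance_name: str) -> str:
    # Inspect the compose-managed container (the logged GET .../json call)
    # and return its address on the project network.
    client = docker.from_env()
    container = client.containers.get(f"{project_name}-{instance_name}-1")
    networks = container.attrs["NetworkSettings"]["Networks"]
    return next(iter(networks.values()))["IPAddress"]
```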
2025-04-04 18:13:53 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper)
2025-04-04 18:13:53 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn)
2025-04-04 18:13:53 [ 670 ] ERROR : Can't connect to HDFS or preparations are not done yet HTTPConnectionPool(host='172.16.2.2', port=50070): Max retries exceeded with url: /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) (cluster.py:2572, wait_hdfs_to_start)
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 203, in _new_conn
    sock = connection.create_connection(
  File "/usr/local/lib/python3.10/dist-packages/urllib3/util/connection.py", line 85, in create_connection
    raise err
  File "/usr/local/lib/python3.10/dist-packages/urllib3/util/connection.py", line 73, in create_connection
    sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 791, in urlopen
    response = self._make_request(
  File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 497, in _make_request
    conn.request(
  File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 395, in request
    self.endheaders()
  File "/usr/lib/python3.10/http/client.py", line 1278, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/usr/lib/python3.10/http/client.py", line 1038, in _send_output
    self.send(msg)
  File "/usr/lib/python3.10/http/client.py", line 976, in send
    self.connect()
  File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 243, in connect
    self.sock = self._new_conn()
  File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 218, in _new_conn
    raise NewConnectionError(
urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/requests/adapters.py", line 486, in send
    resp = conn.urlopen(
  File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 845, in urlopen
    retries = retries.increment(
  File "/usr/local/lib/python3.10/dist-packages/urllib3/util/retry.py", line 515, in increment
    raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='172.16.2.2', port=50070): Max retries exceeded with url: /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/ClickHouse/tests/integration/helpers/cluster.py", line 2565, in wait_hdfs_to_start
    self.hdfs_api.write_data("/somefilewithrandomname222", "1")
  File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 211, in write_data
    response = self.req_wrapper(
  File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 132, in req_wrapper
    response_data = func(**kwargs)
  File "/usr/local/lib/python3.10/dist-packages/requests/api.py", line 130, in put
    return request("put", url, data=data, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/requests/api.py", line 59, in request
    return session.request(method=method, url=url, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/requests/adapters.py", line 519, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='172.16.2.2', port=50070): Max retries exceeded with url: /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
[18:13:54 through 18:13:59: the write_data probe, the CALL, the "Starting new HTTP connection" line and the "Can't connect to HDFS or preparations are not done yet" ERROR with an identical traceback repeat once per second; six verbatim repetitions elided, the last truncated mid-traceback]
File "/usr/local/lib/python3.10/dist-packages/requests/api.py", line 130, in put return request("put", url, data=data, **kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/api.py", line 59, in request return session.request(method=method, url=url, **kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 589, in request resp = self.send(prep, **send_kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 703, in send r = adapter.send(request, **kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/adapters.py", line 519, in send raise ConnectionError(e, request=request) requests.exceptions.ConnectionError: HTTPConnectionPool(host='172.16.2.2', port=50070): Max retries exceeded with url: /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 2025-04-04 18:14:00 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /somefilewithrandomname222 user:root, principal:None (hdfs_api.py:194, write_data) 2025-04-04 18:14:00 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:14:00 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:14:00 [ 670 ] ERROR : Can't connect to HDFS or preparations are not done yet HTTPConnectionPool(host='172.16.2.2', port=50070): Max retries exceeded with url: /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) (cluster.py:2572, wait_hdfs_to_start) Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 203, in _new_conn sock = connection.create_connection( File "/usr/local/lib/python3.10/dist-packages/urllib3/util/connection.py", line 85, in create_connection raise err File "/usr/local/lib/python3.10/dist-packages/urllib3/util/connection.py", line 73, in create_connection sock.connect(sa) ConnectionRefusedError: [Errno 111] Connection refused The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 791, in urlopen response = self._make_request( File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 497, in _make_request conn.request( File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 395, in request self.endheaders() File "/usr/lib/python3.10/http/client.py", line 1278, in endheaders self._send_output(message_body, encode_chunked=encode_chunked) File "/usr/lib/python3.10/http/client.py", line 1038, in _send_output self.send(msg) File "/usr/lib/python3.10/http/client.py", line 976, in send self.connect() File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 243, in connect self.sock = self._new_conn() File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 218, in _new_conn raise NewConnectionError( urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused The above exception was the direct cause of the following exception: Traceback (most recent call last): File 
"/usr/local/lib/python3.10/dist-packages/requests/adapters.py", line 486, in send resp = conn.urlopen( File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 845, in urlopen retries = retries.increment( File "/usr/local/lib/python3.10/dist-packages/urllib3/util/retry.py", line 515, in increment raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='172.16.2.2', port=50070): Max retries exceeded with url: /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/ClickHouse/tests/integration/helpers/cluster.py", line 2565, in wait_hdfs_to_start self.hdfs_api.write_data("/somefilewithrandomname222", "1") File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 211, in write_data response = self.req_wrapper( File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 132, in req_wrapper response_data = func(**kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/api.py", line 130, in put return request("put", url, data=data, **kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/api.py", line 59, in request return session.request(method=method, url=url, **kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 589, in request resp = self.send(prep, **send_kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 703, in send r = adapter.send(request, **kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/adapters.py", line 519, in send raise ConnectionError(e, request=request) requests.exceptions.ConnectionError: HTTPConnectionPool(host='172.16.2.2', port=50070): Max retries exceeded with url: /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 2025-04-04 18:14:01 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /somefilewithrandomname222 user:root, principal:None (hdfs_api.py:194, write_data) 2025-04-04 18:14:01 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:14:01 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:14:01 [ 670 ] ERROR : Can't connect to HDFS or preparations are not done yet HTTPConnectionPool(host='172.16.2.2', port=50070): Max retries exceeded with url: /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) (cluster.py:2572, wait_hdfs_to_start) Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 203, in _new_conn sock = connection.create_connection( File "/usr/local/lib/python3.10/dist-packages/urllib3/util/connection.py", line 85, in create_connection raise err File "/usr/local/lib/python3.10/dist-packages/urllib3/util/connection.py", line 73, in create_connection sock.connect(sa) ConnectionRefusedError: [Errno 111] Connection refused The above exception was the direct cause of the 
following exception: Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 791, in urlopen response = self._make_request( File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 497, in _make_request conn.request( File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 395, in request self.endheaders() File "/usr/lib/python3.10/http/client.py", line 1278, in endheaders self._send_output(message_body, encode_chunked=encode_chunked) File "/usr/lib/python3.10/http/client.py", line 1038, in _send_output self.send(msg) File "/usr/lib/python3.10/http/client.py", line 976, in send self.connect() File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 243, in connect self.sock = self._new_conn() File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 218, in _new_conn raise NewConnectionError( urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/requests/adapters.py", line 486, in send resp = conn.urlopen( File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 845, in urlopen retries = retries.increment( File "/usr/local/lib/python3.10/dist-packages/urllib3/util/retry.py", line 515, in increment raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='172.16.2.2', port=50070): Max retries exceeded with url: /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/ClickHouse/tests/integration/helpers/cluster.py", line 2565, in wait_hdfs_to_start self.hdfs_api.write_data("/somefilewithrandomname222", "1") File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 211, in write_data response = self.req_wrapper( File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 132, in req_wrapper response_data = func(**kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/api.py", line 130, in put return request("put", url, data=data, **kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/api.py", line 59, in request return session.request(method=method, url=url, **kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 589, in request resp = self.send(prep, **send_kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 703, in send r = adapter.send(request, **kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/adapters.py", line 519, in send raise ConnectionError(e, request=request) requests.exceptions.ConnectionError: HTTPConnectionPool(host='172.16.2.2', port=50070): Max retries exceeded with url: /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 2025-04-04 18:14:02 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /somefilewithrandomname222 user:root, principal:None (hdfs_api.py:194, write_data) 2025-04-04 18:14:02 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE', 
'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:14:02 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:14:02 [ 670 ] ERROR : Can't connect to HDFS or preparations are not done yet HTTPConnectionPool(host='172.16.2.2', port=50070): Max retries exceeded with url: /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) (cluster.py:2572, wait_hdfs_to_start) Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 203, in _new_conn sock = connection.create_connection( File "/usr/local/lib/python3.10/dist-packages/urllib3/util/connection.py", line 85, in create_connection raise err File "/usr/local/lib/python3.10/dist-packages/urllib3/util/connection.py", line 73, in create_connection sock.connect(sa) ConnectionRefusedError: [Errno 111] Connection refused The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 791, in urlopen response = self._make_request( File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 497, in _make_request conn.request( File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 395, in request self.endheaders() File "/usr/lib/python3.10/http/client.py", line 1278, in endheaders self._send_output(message_body, encode_chunked=encode_chunked) File "/usr/lib/python3.10/http/client.py", line 1038, in _send_output self.send(msg) File "/usr/lib/python3.10/http/client.py", line 976, in send self.connect() File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 243, in connect self.sock = self._new_conn() File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 218, in _new_conn raise NewConnectionError( urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/requests/adapters.py", line 486, in send resp = conn.urlopen( File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 845, in urlopen retries = retries.increment( File "/usr/local/lib/python3.10/dist-packages/urllib3/util/retry.py", line 515, in increment raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='172.16.2.2', port=50070): Max retries exceeded with url: /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/ClickHouse/tests/integration/helpers/cluster.py", line 2565, in wait_hdfs_to_start self.hdfs_api.write_data("/somefilewithrandomname222", "1") File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 211, in write_data response = self.req_wrapper( File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 132, in req_wrapper response_data = func(**kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/api.py", line 130, in 
put return request("put", url, data=data, **kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/api.py", line 59, in request return session.request(method=method, url=url, **kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 589, in request resp = self.send(prep, **send_kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 703, in send r = adapter.send(request, **kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/adapters.py", line 519, in send raise ConnectionError(e, request=request) requests.exceptions.ConnectionError: HTTPConnectionPool(host='172.16.2.2', port=50070): Max retries exceeded with url: /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 2025-04-04 18:14:03 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /somefilewithrandomname222 user:root, principal:None (hdfs_api.py:194, write_data) 2025-04-04 18:14:03 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:14:03 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:14:03 [ 670 ] ERROR : Can't connect to HDFS or preparations are not done yet HTTPConnectionPool(host='172.16.2.2', port=50070): Max retries exceeded with url: /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) (cluster.py:2572, wait_hdfs_to_start) Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 203, in _new_conn sock = connection.create_connection( File "/usr/local/lib/python3.10/dist-packages/urllib3/util/connection.py", line 85, in create_connection raise err File "/usr/local/lib/python3.10/dist-packages/urllib3/util/connection.py", line 73, in create_connection sock.connect(sa) ConnectionRefusedError: [Errno 111] Connection refused The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 791, in urlopen response = self._make_request( File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 497, in _make_request conn.request( File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 395, in request self.endheaders() File "/usr/lib/python3.10/http/client.py", line 1278, in endheaders self._send_output(message_body, encode_chunked=encode_chunked) File "/usr/lib/python3.10/http/client.py", line 1038, in _send_output self.send(msg) File "/usr/lib/python3.10/http/client.py", line 976, in send self.connect() File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 243, in connect self.sock = self._new_conn() File "/usr/local/lib/python3.10/dist-packages/urllib3/connection.py", line 218, in _new_conn raise NewConnectionError( urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/requests/adapters.py", line 486, in 
send resp = conn.urlopen( File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 845, in urlopen retries = retries.increment( File "/usr/local/lib/python3.10/dist-packages/urllib3/util/retry.py", line 515, in increment raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='172.16.2.2', port=50070): Max retries exceeded with url: /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/ClickHouse/tests/integration/helpers/cluster.py", line 2565, in wait_hdfs_to_start self.hdfs_api.write_data("/somefilewithrandomname222", "1") File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 211, in write_data response = self.req_wrapper( File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 132, in req_wrapper response_data = func(**kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/api.py", line 130, in put return request("put", url, data=data, **kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/api.py", line 59, in request return session.request(method=method, url=url, **kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 589, in request resp = self.send(prep, **send_kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 703, in send r = adapter.send(request, **kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/adapters.py", line 519, in send raise ConnectionError(e, request=request) requests.exceptions.ConnectionError: HTTPConnectionPool(host='172.16.2.2', port=50070): Max retries exceeded with url: /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 2025-04-04 18:14:04 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /somefilewithrandomname222 user:root, principal:None (hdfs_api.py:194, write_data) 2025-04-04 18:14:04 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:14:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:14:06 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true HTTP/1.1" 403 None (connectionpool.py:547, _make_request) 2025-04-04 18:14:06 [ 670 ] DEBUG : response_data:b'{"RemoteException":{"exception":"RetriableException","javaClassName":"org.apache.hadoop.ipc.RetriableException","message":"Namenode is in startup mode"}}' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:14:04 GMT, Fri, 04 Apr 2025 18:14:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:14:04 GMT, Fri, 04 Apr 2025 18:14:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:14:06 [ 670 ] ERROR : unexpected response_data.status_code 403 != 307 (hdfs_api.py:139, req_wrapper) 2025-04-04 18:14:07 [ 670 ] DEBUG : CALL: {'url': 
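What the harness is doing above is a readiness poll: wait_hdfs_to_start (cluster.py:2565-2572) calls hdfs_api.write_data("/somefilewithrandomname222", "1") roughly once per second, logs "Can't connect to HDFS or preparations are not done yet" on each failure, and keeps retrying until the write goes through. A minimal sketch of that loop, reconstructed from the log alone (the real helper's signature, timeout, and final error message are not shown here, so those are assumptions):

    import logging
    import time

    def wait_hdfs_to_start(hdfs_api, timeout=120):
        # Readiness poll as reconstructed from the log: attempt a throwaway
        # WebHDFS write once per second until HDFS accepts it. The timeout
        # value and the final exception message are illustrative guesses.
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                hdfs_api.write_data("/somefilewithrandomname222", "1")
                return  # both namenode and datanode answered: HDFS is usable
            except Exception:
                logging.exception(
                    "Can't connect to HDFS or preparations are not done yet"
                )
                time.sleep(1)
        raise Exception("Can't wait HDFS to start")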
[the namenode answers 403 "Namenode is in startup mode" to every attempt from 18:14:07 through 18:14:14; after two such 403s req_wrapper gives up, raise_for_status() fires, and wait_hdfs_to_start logs the failure, e.g. at 18:14:08:]
2025-04-04 18:14:08 [ 670 ] ERROR : Can't connect to HDFS or preparations are not done yet 403 Client Error: Forbidden for url: http://172.16.2.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true (cluster.py:2572, wait_hdfs_to_start)
Traceback (most recent call last):
  File "/ClickHouse/tests/integration/helpers/cluster.py", line 2565, in wait_hdfs_to_start
    self.hdfs_api.write_data("/somefilewithrandomname222", "1")
  File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 211, in write_data
    response = self.req_wrapper(
  File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 143, in req_wrapper
    response_data.raise_for_status()
  File "/usr/local/lib/python3.10/dist-packages/requests/models.py", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: http://172.16.2.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true
2025-04-04 18:14:15 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /somefilewithrandomname222 user:root, principal:None (hdfs_api.py:194, write_data)
2025-04-04 18:14:15 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper)
2025-04-04 18:14:15 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn)
2025-04-04 18:14:15 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true HTTP/1.1" 403 None (connectionpool.py:547, _make_request)
2025-04-04 18:14:15 [ 670 ] DEBUG : response_data:b'{"RemoteException":{"exception":"IOException","javaClassName":"java.io.IOException","message":"Failed to find datanode, suggest to check cluster health."}}' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:14:15 GMT, Fri, 04 Apr 2025 18:14:15 GMT', 'Date': 'Fri, 04 Apr 2025 18:14:15 GMT, Fri, 04 Apr 2025 18:14:15 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper)
2025-04-04 18:14:15 [ 670 ] ERROR : unexpected response_data.status_code 403 != 307 (hdfs_api.py:139, req_wrapper)
[the "Failed to find datanode" 403 repeats at 18:14:16, and the corresponding HTTPError is logged at 18:14:17]
2025-04-04 18:14:18 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /somefilewithrandomname222 user:root, principal:None (hdfs_api.py:194, write_data)
2025-04-04 18:14:18 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper)
2025-04-04 18:14:18 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn)
2025-04-04 18:14:18 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request)
2025-04-04 18:14:18 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:14:18 GMT, Fri, 04 Apr 2025 18:14:18 GMT', 'Date': 'Fri, 04 Apr 2025 18:14:18 GMT, Fri, 04 Apr 2025 18:14:18 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper)
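The 307 at 18:14:18 is the first healthy namenode answer, and it exposes the two-step WebHDFS CREATE that the rest of the log follows: a bodyless PUT with op=CREATE goes to the namenode (port 50070), which replies 307 with a Location header naming a datanode (port 50075); the payload is then PUT to that datanode URL, and success is 201 Created. That is why req_wrapper insists on 307 for the first call and 201 for the second. A rough sketch of the protocol with plain requests (the helper in hdfs_api.py additionally pins the host header and swaps the hdfs1 hostname for the container IP, which is omitted here):

    import requests

    def webhdfs_create(namenode_host, path, data, user="root"):
        # Step 1: ask the namenode where to write. A healthy namenode answers
        # 307 Temporary Redirect with a datanode URL in the Location header.
        r1 = requests.put(
            f"http://{namenode_host}:50070/webhdfs/v1{path}",
            params={"op": "CREATE", "overwrite": "true"},
            allow_redirects=False,
        )
        assert r1.status_code == 307, r1.text
        # Step 2: send the bytes to the datanode named in Location;
        # the datanode acknowledges the create with 201 Created.
        r2 = requests.put(
            r1.headers["Location"],
            data=data,
            headers={"content-type": "text/plain"},
            params={"user.name": user},
        )
        assert r2.status_code == 201, r2.text
        return r2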
2025-04-04 18:14:18 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:14:18 GMT, Fri, 04 Apr 2025 18:14:18 GMT', 'Date': 'Fri, 04 Apr 2025 18:14:18 GMT, Fri, 04 Apr 2025 18:14:18 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:228, write_data)
2025-04-04 18:14:18 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'1', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/somefilewithrandomname222', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper)
2025-04-04 18:14:18 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn)
2025-04-04 18:14:19 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root HTTP/1.1" 403 None (connectionpool.py:547, _make_request)
2025-04-04 18:14:19 [ 670 ] DEBUG : response_data:b'{"RemoteException":{"exception":"RemoteException","javaClassName":"org.apache.hadoop.ipc.RemoteException","message":"Cannot create file/somefilewithrandomname222. Name node is in safe mode.\\nThe reported blocks 31 has reached the threshold 0.9990 of total blocks 31. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 28 seconds.\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2630)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2519)\\n\\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:566)\\n\\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:394)\\n\\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\\n\\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)\\n\\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)\\n\\tat java.security.AccessController.doPrivileged(Native Method)\\n\\tat javax.security.auth.Subject.doAs(Subject.java:415)\\n\\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)\\n\\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)\\n"}}' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:14:18 GMT, Fri, 04 Apr 2025 18:14:18 GMT', 'Date': 'Fri, 04 Apr 2025 18:14:18 GMT, Fri, 04 Apr 2025 18:14:18 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper)
2025-04-04 18:14:19 [ 670 ] ERROR : unexpected response_data.status_code 403 != 201 (hdfs_api.py:139, req_wrapper)
[the datanode PUT is retried at 18:14:20 and rejected with the same safe-mode 403; the countdown in the message reads 27 seconds]
2025-04-04 18:14:21 [ 670 ] ERROR : Can't connect to HDFS or preparations are not done yet 403 Client Error: Forbidden for url: http://172.16.2.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root (cluster.py:2572, wait_hdfs_to_start)
Traceback (most recent call last):
  File "/ClickHouse/tests/integration/helpers/cluster.py", line 2565, in wait_hdfs_to_start
    self.hdfs_api.write_data("/somefilewithrandomname222", "1")
  File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 242, in write_data
    response = self.req_wrapper(
  File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 143, in req_wrapper
    response_data.raise_for_status()
  File "/usr/local/lib/python3.10/dist-packages/requests/models.py", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: http://172.16.2.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root
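From here on the 403s are not a connectivity problem: the namenode is up but still in safe mode, and it rejects creates until the reported-block threshold (0.9990) is met and the safe-mode extension window runs out; the RemoteException message even prints the countdown. The harness simply keeps retrying the CREATE once per second until the countdown hits zero. A quieter alternative (hypothetical, not what this harness does) would be to block on the stock Hadoop CLI, which returns only once safe mode is off:

    import subprocess

    def wait_safe_mode_off(container="hdfs1", timeout=120):
        # Hypothetical alternative to retrying op=CREATE in a loop:
        # 'hdfs dfsadmin -safemode wait' blocks until the namenode reports
        # safe mode OFF. Assumes the hdfs CLI is present inside the hdfs1
        # container; the container name and timeout are illustrative.
        subprocess.run(
            ["docker", "exec", container, "hdfs", "dfsadmin", "-safemode", "wait"],
            check=True,
            timeout=timeout,
        )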
Safe mode will be turned off automatically in 25 seconds.\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2630)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2519)\\n\\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:566)\\n\\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:394)\\n\\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\\n\\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)\\n\\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)\\n\\tat java.security.AccessController.doPrivileged(Native Method)\\n\\tat javax.security.auth.Subject.doAs(Subject.java:415)\\n\\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)\\n\\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)\\n"}}' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:14:22 GMT, Fri, 04 Apr 2025 18:14:22 GMT', 'Date': 'Fri, 04 Apr 2025 18:14:22 GMT, Fri, 04 Apr 2025 18:14:22 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:14:22 [ 670 ] ERROR : unexpected response_data.status_code 403 != 201 (hdfs_api.py:139, req_wrapper) 2025-04-04 18:14:23 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'1', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/somefilewithrandomname222', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:14:23 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:14:23 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root HTTP/1.1" 403 None (connectionpool.py:547, _make_request) 2025-04-04 18:14:23 [ 670 ] DEBUG : response_data:b'{"RemoteException":{"exception":"RemoteException","javaClassName":"org.apache.hadoop.ipc.RemoteException","message":"Cannot create file/somefilewithrandomname222. Name node is in safe mode.\\nThe reported blocks 31 has reached the threshold 0.9990 of total blocks 31. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. 
Safe mode will be turned off automatically in 24 seconds.\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2630)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2519)\\n\\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:566)\\n\\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:394)\\n\\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\\n\\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)\\n\\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)\\n\\tat java.security.AccessController.doPrivileged(Native Method)\\n\\tat javax.security.auth.Subject.doAs(Subject.java:415)\\n\\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)\\n\\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)\\n"}}' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:14:23 GMT, Fri, 04 Apr 2025 18:14:23 GMT', 'Date': 'Fri, 04 Apr 2025 18:14:23 GMT, Fri, 04 Apr 2025 18:14:23 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:14:23 [ 670 ] ERROR : unexpected response_data.status_code 403 != 201 (hdfs_api.py:139, req_wrapper) 2025-04-04 18:14:24 [ 670 ] ERROR : Can't connect to HDFS or preparations are not done yet 403 Client Error: Forbidden for url: http://172.16.2.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root (cluster.py:2572, wait_hdfs_to_start) Traceback (most recent call last): File "/ClickHouse/tests/integration/helpers/cluster.py", line 2565, in wait_hdfs_to_start self.hdfs_api.write_data("/somefilewithrandomname222", "1") File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 242, in write_data response = self.req_wrapper( File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 143, in req_wrapper response_data.raise_for_status() File "/usr/local/lib/python3.10/dist-packages/requests/models.py", line 1021, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: http://172.16.2.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root 2025-04-04 18:14:25 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /somefilewithrandomname222 user:root, principal:None (hdfs_api.py:194, write_data) 2025-04-04 18:14:25 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:14:25 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:14:25 [ 670 ] DEBUG : 
http://172.16.2.2:50070 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:14:25 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:14:25 GMT, Fri, 04 Apr 2025 18:14:25 GMT', 'Date': 'Fri, 04 Apr 2025 18:14:25 GMT, Fri, 04 Apr 2025 18:14:25 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:14:25 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:14:25 GMT, Fri, 04 Apr 2025 18:14:25 GMT', 'Date': 'Fri, 04 Apr 2025 18:14:25 GMT, Fri, 04 Apr 2025 18:14:25 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:228, write_data) 2025-04-04 18:14:25 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'1', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/somefilewithrandomname222', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:14:25 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:14:25 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root HTTP/1.1" 403 None (connectionpool.py:547, _make_request) 2025-04-04 18:14:25 [ 670 ] DEBUG : response_data:b'{"RemoteException":{"exception":"RemoteException","javaClassName":"org.apache.hadoop.ipc.RemoteException","message":"Cannot create file/somefilewithrandomname222. Name node is in safe mode.\\nThe reported blocks 31 has reached the threshold 0.9990 of total blocks 31. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. 
[... the same create attempt repeats every ~3 seconds between 18:14:25 and 18:14:44, omitted here: each cycle shows the NameNode 307 redirect, a DataNode PUT answered with 403 and the "Name node is in safe mode" RemoteException, the "unexpected response_data.status_code 403 != 201" error, and the wait_hdfs_to_start traceback, while the safe-mode countdown falls from 22 to 3 seconds ...]
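The cycle above is the standard two-step WebHDFS file create: the client PUTs op=CREATE to the NameNode with redirects disabled, receives a 307 whose Location header names a DataNode, and then PUTs the file body to that DataNode URL. While the NameNode is still in its safe-mode startup extension it hands out the redirect as usual, but the DataNode-side create is rejected with 403 and a RemoteException instead of the expected 201 Created, so wait_hdfs_to_start simply keeps retrying. A minimal sketch of that pattern, assuming the requests library; the names webhdfs_write and wait_for_hdfs and the hard-coded endpoint are illustrative stand-ins, not the actual helpers in hdfs_api.py or cluster.py:

    import time
    import requests

    NAMENODE = "http://172.16.2.2:50070"  # WebHDFS endpoint of the NameNode

    def webhdfs_write(path: str, data: bytes) -> None:
        """Create `path` via WebHDFS: NameNode redirect, then DataNode upload."""
        # Step 1: ask the NameNode where to write. Redirects are disabled so the
        # DataNode URL can be read out of the 307 Location header explicitly.
        r = requests.put(
            f"{NAMENODE}/webhdfs/v1{path}",
            params={"op": "CREATE", "overwrite": "true"},
            allow_redirects=False,
        )
        r.raise_for_status()  # 307 is not an error; this only catches 4xx/5xx
        datanode_url = r.headers["Location"]
        # Step 2: send the file body to the DataNode. While the NameNode is in
        # safe mode this PUT comes back 403 with a RemoteException JSON body.
        r = requests.put(datanode_url, data=data,
                         headers={"content-type": "text/plain"})
        if r.status_code != 201:
            raise requests.HTTPError(
                f"unexpected status {r.status_code} != 201: {r.text!r}",
                response=r)

    def wait_for_hdfs(timeout: float = 120.0) -> None:
        """Poll with a test write until HDFS leaves safe mode or time runs out."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            try:
                webhdfs_write("/somefilewithrandomname222", b"1")
                return  # write succeeded, the NameNode is out of safe mode
            except requests.RequestException as err:
                print(f"HDFS not ready yet: {err}")
                time.sleep(1)  # roughly matches the pacing of the attempts above
        raise TimeoutError("HDFS did not leave safe mode in time")

Outside a test harness the same wait is usually done with the stock CLI, for example "hdfs dfsadmin -safemode wait", which blocks until the NameNode exits safe mode; the countdown in the messages above shows this safe mode was only the normal startup extension and would have cleared on its own within half a minute.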
Safe mode will be turned off automatically in 2 seconds.\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2630)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2519)\\n\\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:566)\\n\\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:394)\\n\\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\\n\\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)\\n\\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)\\n\\tat java.security.AccessController.doPrivileged(Native Method)\\n\\tat javax.security.auth.Subject.doAs(Subject.java:415)\\n\\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)\\n\\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)\\n"}}' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:14:44 GMT, Fri, 04 Apr 2025 18:14:44 GMT', 'Date': 'Fri, 04 Apr 2025 18:14:44 GMT, Fri, 04 Apr 2025 18:14:44 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper)
2025-04-04 18:14:44 [ 670 ] ERROR : unexpected response_data.status_code 403 != 201 (hdfs_api.py:139, req_wrapper)
2025-04-04 18:14:45 [ 670 ] ERROR : Can't connect to HDFS or preparations are not done yet 403 Client Error: Forbidden for url: http://172.16.2.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root (cluster.py:2572, wait_hdfs_to_start)
Traceback (most recent call last):
  File "/ClickHouse/tests/integration/helpers/cluster.py", line 2565, in wait_hdfs_to_start
    self.hdfs_api.write_data("/somefilewithrandomname222", "1")
  File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 242, in write_data
    response = self.req_wrapper(
  File "/ClickHouse/tests/integration/helpers/hdfs_api.py", line 143, in req_wrapper
    response_data.raise_for_status()
  File "/usr/local/lib/python3.10/dist-packages/requests/models.py", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: http://172.16.2.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root
2025-04-04 18:14:46 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /somefilewithrandomname222 user:root, principal:None (hdfs_api.py:194, write_data)
2025-04-04 18:14:46 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/somefilewithrandomname222?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper)
2025-04-04 18:14:46 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn)
2025-04-04 18:14:46 [ 670 ] DEBUG :
http://172.16.2.2:50070 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:14:46 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:14:46 GMT, Fri, 04 Apr 2025 18:14:46 GMT', 'Date': 'Fri, 04 Apr 2025 18:14:46 GMT, Fri, 04 Apr 2025 18:14:46 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:14:46 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:14:46 GMT, Fri, 04 Apr 2025 18:14:46 GMT', 'Date': 'Fri, 04 Apr 2025 18:14:46 GMT, Fri, 04 Apr 2025 18:14:46 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:228, write_data) 2025-04-04 18:14:46 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'1', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/somefilewithrandomname222', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:14:46 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:14:46 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root HTTP/1.1" 403 None (connectionpool.py:547, _make_request) 2025-04-04 18:14:46 [ 670 ] DEBUG : response_data:b'{"RemoteException":{"exception":"RemoteException","javaClassName":"org.apache.hadoop.ipc.RemoteException","message":"Cannot create file/somefilewithrandomname222. Name node is in safe mode.\\nThe reported blocks 31 has reached the threshold 0.9990 of total blocks 31. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. 
Safe mode will be turned off automatically in 0 seconds.\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2630)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2519)\\n\\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:566)\\n\\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:394)\\n\\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\\n\\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)\\n\\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)\\n\\tat java.security.AccessController.doPrivileged(Native Method)\\n\\tat javax.security.auth.Subject.doAs(Subject.java:415)\\n\\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)\\n\\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)\\n"}}' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:14:46 GMT, Fri, 04 Apr 2025 18:14:46 GMT', 'Date': 'Fri, 04 Apr 2025 18:14:46 GMT, Fri, 04 Apr 2025 18:14:46 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:14:46 [ 670 ] ERROR : unexpected response_data.status_code 403 != 201 (hdfs_api.py:139, req_wrapper) 2025-04-04 18:14:47 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'1', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/somefilewithrandomname222', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:14:47 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:14:48 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT /webhdfs/v1/somefilewithrandomname222?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Fsomefilewithrandomname222&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:14:48 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:14:47 GMT, Fri, 04 Apr 2025 18:14:47 GMT', 'Date': 'Fri, 04 Apr 2025 18:14:47 GMT, Fri, 04 Apr 2025 18:14:47 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/somefilewithrandomname222', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:14:48 [ 670 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:14:47 GMT, Fri, 04 Apr 2025 18:14:47 GMT', 'Date': 'Fri, 04 Apr 2025 18:14:47 GMT, Fri, 04 Apr 2025 18:14:47 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/somefilewithrandomname222', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:253, write_data) 2025-04-04 18:14:48 [ 670 ] DEBUG : Connected to HDFS and 
SafeMode disabled! (cluster.py:2566, wait_hdfs_to_start) 2025-04-04 18:14:48 [ 670 ] INFO : Trying to create Minio instance by command docker compose --project-name rootteststorageiceberg-gw0 --env-file /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --verbose up -d (cluster.py:3132, start) 2025-04-04 18:14:48 [ 670 ] DEBUG : Command:[docker compose --project-name rootteststorageiceberg-gw0 --env-file /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --verbose up -d] (cluster.py:122, run_and_check) 2025-04-04 18:14:49 [ 670 ] DEBUG : Stderr:time="2025-04-04T18:14:48Z" level=trace msg="Docker Desktop integration not enabled" (cluster.py:148, run_and_check) 2025-04-04 18:14:49 [ 670 ] DEBUG : Stderr: Volume "rootteststorageiceberg-gw0_data1-1" Creating (cluster.py:148, run_and_check) 2025-04-04 18:14:49 [ 670 ] DEBUG : Stderr: Volume "rootteststorageiceberg-gw0_data1-1" Created (cluster.py:148, run_and_check) 2025-04-04 18:14:49 [ 670 ] DEBUG : Stderr:time="2025-04-04T18:14:48Z" level=warning msg="Found orphan containers ([rootteststorageiceberg-gw0-hdfs1-1]) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up." (cluster.py:148, run_and_check) 2025-04-04 18:14:49 [ 670 ] DEBUG : Stderr: Container rootteststorageiceberg-gw0-proxy2-1 Creating (cluster.py:148, run_and_check) 2025-04-04 18:14:49 [ 670 ] DEBUG : Stderr: Container rootteststorageiceberg-gw0-proxy1-1 Creating (cluster.py:148, run_and_check) 2025-04-04 18:14:49 [ 670 ] DEBUG : Stderr: proxy2 The requested image's platform (linux/arm64/v8) does not match the detected host platform (linux/amd64/v3) and no specific platform was requested (cluster.py:148, run_and_check) 2025-04-04 18:14:49 [ 670 ] DEBUG : Stderr: proxy1 The requested image's platform (linux/arm64/v8) does not match the detected host platform (linux/amd64/v3) and no specific platform was requested (cluster.py:148, run_and_check) 2025-04-04 18:14:49 [ 670 ] DEBUG : Stderr: Container rootteststorageiceberg-gw0-proxy2-1 Created (cluster.py:148, run_and_check) 2025-04-04 18:14:49 [ 670 ] DEBUG : Stderr: Container rootteststorageiceberg-gw0-proxy1-1 Created (cluster.py:148, run_and_check) 2025-04-04 18:14:49 [ 670 ] DEBUG : Stderr: Container rootteststorageiceberg-gw0-minio1-1 Creating (cluster.py:148, run_and_check) 2025-04-04 18:14:49 [ 670 ] DEBUG : Stderr: Container rootteststorageiceberg-gw0-resolver-1 Creating (cluster.py:148, run_and_check) 2025-04-04 18:14:49 [ 670 ] DEBUG : Stderr: Container rootteststorageiceberg-gw0-minio1-1 Created (cluster.py:148, run_and_check) 2025-04-04 18:14:49 [ 670 ] DEBUG : Stderr: Container rootteststorageiceberg-gw0-resolver-1 Created (cluster.py:148, run_and_check) 2025-04-04 18:14:49 [ 670 ] DEBUG : Stderr: Container rootteststorageiceberg-gw0-proxy2-1 Starting (cluster.py:148, run_and_check) 2025-04-04 18:14:49 [ 670 ] DEBUG : Stderr: Container rootteststorageiceberg-gw0-proxy1-1 Starting (cluster.py:148, run_and_check) 2025-04-04 18:14:49 [ 670 ] DEBUG : Stderr: Container rootteststorageiceberg-gw0-proxy1-1 Started (cluster.py:148, run_and_check) 2025-04-04 18:14:49 [ 670 ] DEBUG : Stderr: Container rootteststorageiceberg-gw0-proxy2-1 Started (cluster.py:148, run_and_check) 2025-04-04 
18:14:49 [ 670 ] DEBUG : Stderr: Container rootteststorageiceberg-gw0-minio1-1 Starting (cluster.py:148, run_and_check)
2025-04-04 18:14:49 [ 670 ] DEBUG : Stderr: Container rootteststorageiceberg-gw0-resolver-1 Starting (cluster.py:148, run_and_check)
2025-04-04 18:14:49 [ 670 ] DEBUG : Stderr: Container rootteststorageiceberg-gw0-resolver-1 Started (cluster.py:148, run_and_check)
2025-04-04 18:14:49 [ 670 ] DEBUG : Stderr: Container rootteststorageiceberg-gw0-minio1-1 Started (cluster.py:148, run_and_check)
2025-04-04 18:14:49 [ 670 ] DEBUG : Stderr:time="2025-04-04T18:14:49Z" level=debug msg="otel error" error="" (cluster.py:148, run_and_check)
2025-04-04 18:14:49 [ 670 ] DEBUG : Stderr:time="2025-04-04T18:14:49Z" level=debug msg="otel error" error="" (cluster.py:148, run_and_check)
2025-04-04 18:14:49 [ 670 ] INFO : Trying to connect to Minio... (cluster.py:3138, start)
2025-04-04 18:14:49 [ 670 ] DEBUG : get_instance_ip instance_name=minio1 (cluster.py:2082, get_instance_ip)
2025-04-04 18:14:49 [ 670 ] DEBUG : http://localhost:None "GET /v1.46/containers/rootteststorageiceberg-gw0-minio1-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2025-04-04 18:14:49 [ 670 ] DEBUG : get_instance_ip instance_name=proxy1 (cluster.py:2082, get_instance_ip)
2025-04-04 18:14:49 [ 670 ] DEBUG : http://localhost:None "GET /v1.46/containers/rootteststorageiceberg-gw0-proxy1-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2025-04-04 18:14:49 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.5:9001 (connectionpool.py:245, _new_conn)
2025-04-04 18:14:49 [ 670 ] DEBUG : Incremented Retry for (url='/'): Retry(total=2, connect=None, read=None, redirect=None, status=None) (retry.py:517, increment)
2025-04-04 18:14:49 [ 670 ] WARNING : Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x…>: Failed to establish a new connection: [Errno 111] Connection refused')': / (connectionpool.py:872, urlopen)
2025-04-04 18:14:49 [ 670 ] DEBUG : Starting new HTTP connection (2): 172.16.2.5:9001 (connectionpool.py:245, _new_conn)
2025-04-04 18:14:49 [ 670 ] DEBUG : Incremented Retry for (url='/'): Retry(total=1, connect=None, read=None, redirect=None, status=None) (retry.py:517, increment)
2025-04-04 18:14:49 [ 670 ] WARNING : Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x…>: Failed to establish a new connection: [Errno 111] Connection refused')': / (connectionpool.py:872, urlopen)
2025-04-04 18:14:49 [ 670 ] DEBUG : Starting new HTTP connection (3): 172.16.2.5:9001 (connectionpool.py:245, _new_conn)
2025-04-04 18:14:49 [ 670 ] DEBUG : Incremented Retry for (url='/'): Retry(total=0, connect=None, read=None, redirect=None, status=None) (retry.py:517, increment)
2025-04-04 18:14:49 [ 670 ] WARNING : Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x…>: Failed to establish a new connection: [Errno 111] Connection refused')': / (connectionpool.py:872, urlopen)
2025-04-04 18:14:49 [ 670 ] DEBUG : Starting new HTTP connection (4): 172.16.2.5:9001 (connectionpool.py:245, _new_conn)
2025-04-04 18:14:49 [ 670 ] DEBUG : Can't connect to Minio: HTTPConnectionPool(host='172.16.2.5', port=9001): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x…>: Failed to establish a new connection: [Errno 111] Connection refused')) (cluster.py:2637, wait_minio_to_start)
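Both waits above follow the same poll-and-retry pattern: wait_hdfs_to_start re-runs the WebHDFS upload while the namenode reports safe mode, and wait_minio_to_start keeps reconnecting while the port still refuses connections. The HDFS case also shows the standard WebHDFS two-step create: a PUT to the namenode on port 50070 answers 307 with a datanode URL in the Location header, and the payload is then PUT to that datanode, where 201 means the file was created. A minimal sketch of that flow with requests (a hypothetical helper for illustration, not the actual hdfs_api.py code):

    import time
    import requests

    def webhdfs_write(namenode_host, path, data, attempts=10):
        """Two-step WebHDFS create with retries, as in the log above (sketch)."""
        for _ in range(attempts):
            # Step 1: ask the namenode; it answers 307 with the datanode URL.
            redirect = requests.put(
                f"http://{namenode_host}:50070/webhdfs/v1{path}?op=CREATE&overwrite=true",
                allow_redirects=False,
            )
            # Step 2: send the payload to the datanode from the Location header.
            response = requests.put(
                redirect.headers["Location"],
                data=data,
                headers={"content-type": "text/plain"},
            )
            if response.status_code == 201:
                return
            # 403 RemoteException ("Name node is in safe mode"): wait and retry.
            time.sleep(1)
        response.raise_for_status()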
2025-04-04 18:14:50 [ 670 ] DEBUG : Starting new HTTP connection (5): 172.16.2.5:9001 (connectionpool.py:245, _new_conn) 2025-04-04 18:14:50 [ 670 ] DEBUG : http://172.16.2.5:9001 "GET / HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-04 18:14:50 [ 670 ] DEBUG : Connected to Minio. (cluster.py:2617, wait_minio_to_start) 2025-04-04 18:14:50 [ 670 ] DEBUG : http://172.16.2.5:9001 "GET /root?location= HTTP/1.1" 404 0 (connectionpool.py:547, _make_request) 2025-04-04 18:14:50 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-04 18:14:50 [ 670 ] DEBUG : S3 bucket 'root' created (cluster.py:2632, wait_minio_to_start) 2025-04-04 18:14:50 [ 670 ] DEBUG : http://172.16.2.5:9001 "GET /root2?location= HTTP/1.1" 404 0 (connectionpool.py:547, _make_request) 2025-04-04 18:14:50 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root2 HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-04 18:14:50 [ 670 ] DEBUG : S3 bucket 'root2' created (cluster.py:2632, wait_minio_to_start) 2025-04-04 18:14:50 [ 670 ] INFO : Trying to create Azurite instance by command docker compose --project-name rootteststorageiceberg-gw0 --env-file /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_azurite.yml --verbose up -d (cluster.py:3143, start) 2025-04-04 18:14:50 [ 670 ] DEBUG : Command:[docker compose --project-name rootteststorageiceberg-gw0 --env-file /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_azurite.yml --verbose up -d] (cluster.py:122, run_and_check) 2025-04-04 18:14:50 [ 670 ] DEBUG : Stderr:time="2025-04-04T18:14:50Z" level=trace msg="Docker Desktop integration not enabled" (cluster.py:148, run_and_check) 2025-04-04 18:14:50 [ 670 ] DEBUG : Stderr:time="2025-04-04T18:14:50Z" level=warning msg="Found orphan containers ([rootteststorageiceberg-gw0-minio1-1 rootteststorageiceberg-gw0-resolver-1 rootteststorageiceberg-gw0-proxy2-1 rootteststorageiceberg-gw0-proxy1-1 rootteststorageiceberg-gw0-hdfs1-1]) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up." 
(cluster.py:148, run_and_check) 2025-04-04 18:14:50 [ 670 ] DEBUG : Stderr: Container rootteststorageiceberg-gw0-azurite1-1 Creating (cluster.py:148, run_and_check) 2025-04-04 18:14:50 [ 670 ] DEBUG : Stderr: Container rootteststorageiceberg-gw0-azurite1-1 Created (cluster.py:148, run_and_check) 2025-04-04 18:14:50 [ 670 ] DEBUG : Stderr: Container rootteststorageiceberg-gw0-azurite1-1 Starting (cluster.py:148, run_and_check) 2025-04-04 18:14:50 [ 670 ] DEBUG : Stderr: Container rootteststorageiceberg-gw0-azurite1-1 Started (cluster.py:148, run_and_check) 2025-04-04 18:14:50 [ 670 ] DEBUG : Stderr:time="2025-04-04T18:14:50Z" level=debug msg="otel error" error="" (cluster.py:148, run_and_check) 2025-04-04 18:14:50 [ 670 ] DEBUG : Stderr:time="2025-04-04T18:14:50Z" level=debug msg="otel error" error="" (cluster.py:148, run_and_check) 2025-04-04 18:14:50 [ 670 ] INFO : Trying to connect to Azurite (cluster.py:3157, start) 2025-04-04 18:14:51 [ 670 ] INFO : Request URL: 'http://127.0.0.1:30000/devstoreaccount1/?restype=REDACTED&comp=REDACTED' Request method: 'GET' Request headers: 'x-ms-version': 'REDACTED' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b3a2b048-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' No body was attached to the request (_universal.py:514, on_request) 2025-04-04 18:14:51 [ 670 ] DEBUG : Starting new HTTP connection (1): 127.0.0.1:30000 (connectionpool.py:245, _new_conn) 2025-04-04 18:14:51 [ 670 ] DEBUG : http://127.0.0.1:30000 "GET /devstoreaccount1/?restype=account&comp=properties HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-04 18:14:51 [ 670 ] INFO : Response status: 200 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'x-ms-client-request-id': 'b3a2b048-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '4db9a46b-6801-421b-9156-84d49bc9f707' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:14:51 GMT' 'x-ms-sku-name': 'REDACTED' 'x-ms-account-kind': 'REDACTED' 'x-ms-is-hns-enabled': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' (_universal.py:550, on_response) 2025-04-04 18:14:51 [ 670 ] DEBUG : {'client_request_id': 'b3a2b048-1180-11f0-918b-0242ac110002', 'request_id': '4db9a46b-6801-421b-9156-84d49bc9f707', 'version': '2025-05-05', 'date': datetime.datetime(2025, 4, 4, 18, 14, 51, tzinfo=datetime.timezone.utc), 'sku_name': 'Standard_RAGRS', 'account_kind': 'StorageV2', 'is_hns_enabled': False} (cluster.py:2665, wait_azurite_to_start) 2025-04-04 18:14:51 [ 670 ] INFO : Request URL: 'http://127.0.0.1:30000/devstoreaccount1/?comp=REDACTED&prefix=REDACTED&include=REDACTED' Request method: 'GET' Request headers: 'x-ms-version': 'REDACTED' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b3a7c326-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' No body was attached to the request (_universal.py:514, on_request) 2025-04-04 18:14:51 [ 670 ] DEBUG : http://127.0.0.1:30000 "GET /devstoreaccount1/?comp=list&prefix=azurite-container&include= HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:14:51 [ 670 ] INFO : Response status: 200 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'x-ms-client-request-id': 'b3a7c326-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 
'd10d5a96-65ba-4a43-82e5-7833d18ca17b' 'x-ms-version': 'REDACTED' 'content-type': 'application/xml' 'Date': 'Fri, 04 Apr 2025 18:14:51 GMT' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Transfer-Encoding': 'chunked' (_universal.py:550, on_response) 2025-04-04 18:14:51 [ 670 ] INFO : Request URL: 'http://127.0.0.1:30000/devstoreaccount1/azurite-container?restype=REDACTED' Request method: 'GET' Request headers: 'x-ms-version': 'REDACTED' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b3a9214e-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' No body was attached to the request (_universal.py:514, on_request) 2025-04-04 18:14:51 [ 670 ] DEBUG : http://127.0.0.1:30000 "GET /devstoreaccount1/azurite-container?restype=container HTTP/1.1" 404 None (connectionpool.py:547, _make_request) 2025-04-04 18:14:51 [ 670 ] INFO : Response status: 404 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'x-ms-error-code': 'ContainerNotFound' 'x-ms-request-id': 'b487e38f-c9b6-4ad5-8947-fad625fc8bba' 'content-type': 'application/xml' 'Date': 'Fri, 04 Apr 2025 18:14:51 GMT' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Transfer-Encoding': 'chunked' (_universal.py:550, on_response) 2025-04-04 18:14:51 [ 670 ] DEBUG : azurite container 'azurite-container' doesn't exist, creating it (cluster.py:2687, wait_azurite_to_start) 2025-04-04 18:14:51 [ 670 ] INFO : Request URL: 'http://127.0.0.1:30000/devstoreaccount1/azurite-container?restype=REDACTED' Request method: 'PUT' Request headers: 'x-ms-version': 'REDACTED' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b3aa1a36-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' No body was attached to the request (_universal.py:514, on_request) 2025-04-04 18:14:51 [ 670 ] DEBUG : http://127.0.0.1:30000 "PUT /devstoreaccount1/azurite-container?restype=container HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:14:51 [ 670 ] INFO : Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x23697E5161FF9A0"' 'last-modified': 'Fri, 04 Apr 2025 18:14:51 GMT' 'x-ms-client-request-id': 'b3aa1a36-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'a7bb09c1-1043-4c39-9c5a-13b9497ae714' 'x-ms-version': 'REDACTED' 'Date': 'Fri, 04 Apr 2025 18:14:51 GMT' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' (_universal.py:550, on_response) 2025-04-04 18:14:51 [ 670 ] DEBUG : ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/.env --project-name rootteststorageiceberg-gw0 --file /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_hdfs.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_azurite.yml --file /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node2/docker-compose.yml --file /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node3/docker-compose.yml up -d --no-recreate') (cluster.py:3200, start) 
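The Azurite handshake above is the usual ensure-container sequence: probe the account properties, look the container up, get 404 ContainerNotFound, and create it with a PUT that returns 201. Roughly the same steps with azure-storage-blob against the well-known Azurite dev account (a sketch mirroring the endpoint and container name from the log; not the actual cluster.py code):

    from azure.core.exceptions import ResourceExistsError
    from azure.storage.blob import BlobServiceClient

    # Standard Azurite dev-account credentials; blob endpoint on port 30000 as in the log.
    CONNECTION_STRING = (
        "DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;"
        "AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;"
        "BlobEndpoint=http://127.0.0.1:30000/devstoreaccount1;"
    )

    service = BlobServiceClient.from_connection_string(CONNECTION_STRING)
    service.get_account_information()  # the GET ?restype=account&comp=properties probe
    try:
        service.create_container("azurite-container")  # PUT ?restype=container -> 201
    except ResourceExistsError:
        pass  # container already exists on a rerun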
2025-04-04 18:14:51 [ 670 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/.env --project-name rootteststorageiceberg-gw0 --file /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_hdfs.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_azurite.yml --file /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node2/docker-compose.yml --file /ClickHouse/tests/integration/test_storage_iceberg/_instances-0-gw0/node3/docker-compose.yml up -d --no-recreate] (cluster.py:122, run_and_check) 2025-04-04 18:14:52 [ 670 ] DEBUG : Stderr: Container rootteststorageiceberg-gw0-proxy1-1 Running (cluster.py:148, run_and_check) 2025-04-04 18:14:52 [ 670 ] DEBUG : Stderr: Container rootteststorageiceberg-gw0-node2-1 Creating (cluster.py:148, run_and_check) 2025-04-04 18:14:52 [ 670 ] DEBUG : Stderr: Container rootteststorageiceberg-gw0-hdfs1-1 Running (cluster.py:148, run_and_check) 2025-04-04 18:14:52 [ 670 ] DEBUG : Stderr: Container rootteststorageiceberg-gw0-node3-1 Creating (cluster.py:148, run_and_check) 2025-04-04 18:14:52 [ 670 ] DEBUG : Stderr: Container rootteststorageiceberg-gw0-azurite1-1 Running (cluster.py:148, run_and_check) 2025-04-04 18:14:52 [ 670 ] DEBUG : Stderr: Container rootteststorageiceberg-gw0-proxy2-1 Running (cluster.py:148, run_and_check) 2025-04-04 18:14:52 [ 670 ] DEBUG : Stderr: Container rootteststorageiceberg-gw0-resolver-1 Running (cluster.py:148, run_and_check) 2025-04-04 18:14:52 [ 670 ] DEBUG : Stderr: Container rootteststorageiceberg-gw0-minio1-1 Running (cluster.py:148, run_and_check) 2025-04-04 18:14:52 [ 670 ] DEBUG : Stderr: Container rootteststorageiceberg-gw0-node1-1 Creating (cluster.py:148, run_and_check) 2025-04-04 18:14:52 [ 670 ] DEBUG : Stderr: Container rootteststorageiceberg-gw0-node1-1 Created (cluster.py:148, run_and_check) 2025-04-04 18:14:52 [ 670 ] DEBUG : Stderr: Container rootteststorageiceberg-gw0-node2-1 Created (cluster.py:148, run_and_check) 2025-04-04 18:14:52 [ 670 ] DEBUG : Stderr: Container rootteststorageiceberg-gw0-node3-1 Created (cluster.py:148, run_and_check) 2025-04-04 18:14:52 [ 670 ] DEBUG : Stderr: Container rootteststorageiceberg-gw0-node2-1 Starting (cluster.py:148, run_and_check) 2025-04-04 18:14:52 [ 670 ] DEBUG : Stderr: Container rootteststorageiceberg-gw0-node3-1 Starting (cluster.py:148, run_and_check) 2025-04-04 18:14:52 [ 670 ] DEBUG : Stderr: Container rootteststorageiceberg-gw0-node1-1 Starting (cluster.py:148, run_and_check) 2025-04-04 18:14:52 [ 670 ] DEBUG : Stderr: Container rootteststorageiceberg-gw0-node3-1 Started (cluster.py:148, run_and_check) 2025-04-04 18:14:52 [ 670 ] DEBUG : Stderr: Container rootteststorageiceberg-gw0-node2-1 Started (cluster.py:148, run_and_check) 2025-04-04 18:14:52 [ 670 ] DEBUG : Stderr: Container rootteststorageiceberg-gw0-node1-1 Started (cluster.py:148, run_and_check) 2025-04-04 18:14:52 [ 670 ] DEBUG : ClickHouse instance created (cluster.py:3208, start) 2025-04-04 18:14:52 [ 670 ] DEBUG : get_instance_ip instance_name=node1 (cluster.py:2082, get_instance_ip) 2025-04-04 18:14:52 [ 670 ] DEBUG : http://localhost:None "GET /v1.46/containers/rootteststorageiceberg-gw0-node1-1/json HTTP/1.1" 200 None (connectionpool.py:547, 
_make_request) 2025-04-04 18:14:52 [ 670 ] DEBUG : get_instance_ip instance_name=node1 (cluster.py:2092, get_instance_global_ipv6) 2025-04-04 18:14:52 [ 670 ] DEBUG : http://localhost:None "GET /v1.46/containers/rootteststorageiceberg-gw0-node1-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:14:52 [ 670 ] DEBUG : Waiting for ClickHouse start in node1, ip: 172.16.2.10... (cluster.py:3216, start) 2025-04-04 18:14:52 [ 670 ] DEBUG : http://localhost:None "GET /v1.46/containers/rootteststorageiceberg-gw0-node1-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:14:52 [ 670 ] DEBUG : http://localhost:None "GET /v1.46/containers/e48f025d55ea38ce2cd0b70de2e18e332327cf16340191fe9711923c883af5c8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:14:52 [ 670 ] DEBUG : http://localhost:None "GET /v1.46/containers/e48f025d55ea38ce2cd0b70de2e18e332327cf16340191fe9711923c883af5c8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:14:52 [ 670 ] DEBUG : http://localhost:None "GET /v1.46/containers/e48f025d55ea38ce2cd0b70de2e18e332327cf16340191fe9711923c883af5c8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:14:52 [ 670 ] DEBUG : http://localhost:None "GET /v1.46/containers/e48f025d55ea38ce2cd0b70de2e18e332327cf16340191fe9711923c883af5c8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:14:52 [ 670 ] DEBUG : http://localhost:None "GET /v1.46/containers/e48f025d55ea38ce2cd0b70de2e18e332327cf16340191fe9711923c883af5c8/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:14:52 [ 670 ] DEBUG : ClickHouse node1 started (cluster.py:3220, start) 2025-04-04 18:14:52 [ 670 ] DEBUG : get_instance_ip instance_name=node2 (cluster.py:2082, get_instance_ip) 2025-04-04 18:14:52 [ 670 ] DEBUG : http://localhost:None "GET /v1.46/containers/rootteststorageiceberg-gw0-node2-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:14:52 [ 670 ] DEBUG : get_instance_ip instance_name=node2 (cluster.py:2092, get_instance_global_ipv6) 2025-04-04 18:14:52 [ 670 ] DEBUG : http://localhost:None "GET /v1.46/containers/rootteststorageiceberg-gw0-node2-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:14:52 [ 670 ] DEBUG : Waiting for ClickHouse start in node2, ip: 172.16.2.8... 
(cluster.py:3216, start) 2025-04-04 18:14:52 [ 670 ] DEBUG : http://localhost:None "GET /v1.46/containers/rootteststorageiceberg-gw0-node2-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:14:52 [ 670 ] DEBUG : http://localhost:None "GET /v1.46/containers/6fa617605cb6f43203da96e10d8eea868705d7ee7253f90051dd0adb7f64faf6/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:14:52 [ 670 ] DEBUG : ClickHouse node2 started (cluster.py:3220, start) 2025-04-04 18:14:52 [ 670 ] DEBUG : get_instance_ip instance_name=node3 (cluster.py:2082, get_instance_ip) 2025-04-04 18:14:52 [ 670 ] DEBUG : http://localhost:None "GET /v1.46/containers/rootteststorageiceberg-gw0-node3-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:14:52 [ 670 ] DEBUG : get_instance_ip instance_name=node3 (cluster.py:2092, get_instance_global_ipv6) 2025-04-04 18:14:52 [ 670 ] DEBUG : http://localhost:None "GET /v1.46/containers/rootteststorageiceberg-gw0-node3-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:14:52 [ 670 ] DEBUG : Waiting for ClickHouse start in node3, ip: 172.16.2.9... (cluster.py:3216, start) 2025-04-04 18:14:52 [ 670 ] DEBUG : http://localhost:None "GET /v1.46/containers/rootteststorageiceberg-gw0-node3-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:14:52 [ 670 ] DEBUG : http://localhost:None "GET /v1.46/containers/45983ddac6be8f632a73c266335998c606241ae86ac310435ce6d8560b6d3d10/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:14:52 [ 670 ] DEBUG : ClickHouse node3 started (cluster.py:3220, start) 2025-04-04 18:14:52 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root?policy= HTTP/1.1" 204 0 (connectionpool.py:547, _make_request) 2025-04-04 18:14:52 [ 670 ] DEBUG : http://172.16.2.5:9001 "GET /root-with-auth?location= HTTP/1.1" 404 0 (connectionpool.py:547, _make_request) 2025-04-04 18:14:52 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root-with-auth HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-04 18:14:52 [ 670 ] INFO : S3 bucket created (test.py:114, started_cluster) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: r u SparkConf rj e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.SparkConf (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: i org.apache.spark.SparkConf bTrue e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yro26 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o26 set sspark.app.name sspark_test e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yro27 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o26 set sspark.master slocal e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yro28 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o26 set sspark.sql.catalog.spark_catalog sorg.apache.iceberg.spark.SparkSessionCatalog e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yro29 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o26 set sspark.sql.catalog.local sorg.apache.iceberg.spark.SparkCatalog e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: 
!yro30 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o26 set sspark.sql.catalog.spark_catalog.type shadoop e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yro31 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o26 set sspark.sql.catalog.spark_catalog.warehouse s/iceberg_data e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yro32 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o26 set sspark.sql.extensions sorg.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yro33 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o26 get sspark.executor.allowSparkContext sfalse e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ysfalse (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o26 contains sspark.serializer.objectStreamReset e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ybfalse (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o26 set sspark.serializer.objectStreamReset s100 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yro34 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o26 contains sspark.rdd.compress e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ybfalse (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o26 set sspark.rdd.compress sTrue e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yro35 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o26 contains sspark.master e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o26 contains sspark.app.name e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o26 contains sspark.master e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o26 get sspark.master e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yslocal (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o26 contains sspark.app.name e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o26 get sspark.app.name e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ysspark_test (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o26 contains sspark.home e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ybfalse (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : 
Command to send: c o26 getAll e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yto36 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: a e o36 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yi13 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: a g o36 i0 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yro37 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o37 _1 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ysspark.master (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o37 _2 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yslocal (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: a e o36 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yi13 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: a g o36 i1 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yro38 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o38 _1 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ysspark.sql.catalog.local (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o38 _2 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ysorg.apache.iceberg.spark.SparkCatalog (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: a e o36 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yi13 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: a g o36 i2 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yro39 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o39 _1 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ysspark.app.name (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o39 _2 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ysspark_test (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: a e o36 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yi13 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: a g o36 i3 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yro40 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o40 _1 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ysspark.rdd.compress (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o40 _2 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ysTrue (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: a e o36 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: 
!yi13 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: a g o36 i4 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yro41 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o41 _1 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ysspark.serializer.objectStreamReset (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o41 _2 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ys100 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: a e o36 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yi13 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: a g o36 i5 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yro42 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o42 _1 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ysspark.sql.catalog.spark_catalog.warehouse (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o42 _2 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ys/iceberg_data (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: a e o36 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yi13 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: a g o36 i6 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yro43 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o43 _1 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ysspark.submit.pyFiles (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o43 _2 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ys (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: a e o36 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yi13 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: a g o36 i7 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yro44 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o44 _1 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ysspark.sql.catalog.spark_catalog (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o44 _2 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ysorg.apache.iceberg.spark.SparkSessionCatalog (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: a e o36 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yi13 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: a g o36 i8 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yro45 (clientserver.py:512, send_command) 2025-04-04 
18:14:52 [ 670 ] DEBUG : Command to send: c o45 _1 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ysspark.submit.deployMode (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o45 _2 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ysclient (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: a e o36 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yi13 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: a g o36 i9 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yro46 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o46 _1 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ysspark.app.submitTime (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o46 _2 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ys1743790268520 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: a e o36 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yi13 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: a g o36 i10 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yro47 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o47 _1 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ysspark.sql.catalog.spark_catalog.type (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o47 _2 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yshadoop (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: a e o36 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yi13 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: a g o36 i11 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yro48 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o48 _1 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ysspark.ui.showConsoleProgress (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o48 _2 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ystrue (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: a e o36 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yi13 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: a g o36 i12 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yro49 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o49 _1 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ysspark.sql.extensions (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o49 _2 e (clientserver.py:501, send_command) 
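This stream of "Command to send" / "Answer received" lines (which continues below) is PySpark driving the JVM over Py4J while it assembles a SparkSession for the Iceberg tests; the conf keys the JVM echoes back match what a builder along these lines would produce (a sketch inferred from the values in the log, not a copy of the test code):

    from pyspark.sql import SparkSession

    # Each .config() call reappears in the log as a "c o26 set ... e" command.
    spark = (
        SparkSession.builder.appName("spark_test")
        .master("local")
        .config("spark.sql.catalog.spark_catalog", "org.apache.iceberg.spark.SparkSessionCatalog")
        .config("spark.sql.catalog.spark_catalog.type", "hadoop")
        .config("spark.sql.catalog.spark_catalog.warehouse", "/iceberg_data")
        .config("spark.sql.catalog.local", "org.apache.iceberg.spark.SparkCatalog")
        .config(
            "spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions",
        )
        .getOrCreate()
    )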
2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ysorg.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: a e o36 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yi13 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: r u JavaSparkContext rj e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.java.JavaSparkContext (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: i org.apache.spark.api.java.JavaSparkContext ro26 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yro50 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o50 sc e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yro51 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o51 conf e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yro52 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: r u PythonAccumulatorV2 rj e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonAccumulatorV2 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: i org.apache.spark.api.python.PythonAccumulatorV2 s127.0.0.1 i63455 s72f9fe8748d825832ed45c4c0694a4e33b9ab7881f74fda6fd8ba7daafb730cb e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yro53 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o50 sc e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yro54 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o54 register ro53 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils isEncryptionEnabled e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils isEncryptionEnabled ro50 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ybfalse (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils getPythonAuthSocketTimeout e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils 
getPythonAuthSocketTimeout ro50 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yL15 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils getSparkBufferSize e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils getSparkBufferSize ro50 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yi65536 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: r u org rj e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: r u org.apache rj e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: r u org.apache.spark rj e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: r u org.apache.spark.SparkFiles rj e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.SparkFiles (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: r m org.apache.spark.SparkFiles getRootDirectory e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.SparkFiles getRootDirectory e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ys/tmp/spark-c232781e-7e38-46f8-81c0-e6e5de6d7676/userFiles-4e28738d-ad67-4b1b-a2fe-2e49ce0c0ab3 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o52 get sspark.submit.pyFiles s e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ys (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: r u org rj e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: r u org.apache rj e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: r u org.apache.spark rj e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: r u org.apache.spark.util rj e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: r u org.apache.spark.util.Utils rj e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: 
!ycorg.apache.spark.util.Utils (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: r m org.apache.spark.util.Utils getLocalDir e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o50 sc e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yro55 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o55 conf e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yro56 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.util.Utils getLocalDir ro56 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ys/tmp/spark-c232781e-7e38-46f8-81c0-e6e5de6d7676 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: r u org rj e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: r u org.apache rj e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: r u org.apache.spark rj e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: r u org.apache.spark.util rj e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: r u org.apache.spark.util.Utils rj e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.util.Utils (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: r m org.apache.spark.util.Utils createTempDir e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.util.Utils createTempDir s/tmp/spark-c232781e-7e38-46f8-81c0-e6e5de6d7676 spyspark e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yro57 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o57 getAbsolutePath e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ys/tmp/spark-c232781e-7e38-46f8-81c0-e6e5de6d7676/pyspark-48f963c5-770c-4b07-8168-9c89af79cee4 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o52 get sspark.python.profile sfalse e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ysfalse (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: r u SparkSession rj e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession getDefaultSession e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, 
send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.SparkSession getDefaultSession e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yro58 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o58 isDefined e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ybfalse (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: r u SparkSession rj e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o50 sc e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yro59 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: i java.util.HashMap e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yao60 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o60 put sspark.app.name sspark_test e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yn (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o60 put sspark.master slocal e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yn (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o60 put sspark.sql.catalog.spark_catalog sorg.apache.iceberg.spark.SparkSessionCatalog e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yn (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o60 put sspark.sql.catalog.local sorg.apache.iceberg.spark.SparkCatalog e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yn (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o60 put sspark.sql.catalog.spark_catalog.type shadoop e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yn (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o60 put sspark.sql.catalog.spark_catalog.warehouse s/iceberg_data e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yn (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o60 put sspark.sql.extensions sorg.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yn (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: i org.apache.spark.sql.SparkSession ro59 ro60 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yro61 (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: r u SparkSession rj e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession setDefaultSession e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 
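[annotation] For orientation: the java.util.HashMap assembled above (object o60) and passed to the SparkSession constructor matches a plain PySpark builder call along these lines. The configuration keys and values are taken verbatim from the trace; the surrounding code is a sketch, not the test harness's actual helper.

from pyspark.sql import SparkSession

# Sketch only: settings are the ones visible in the Py4J trace above.
spark = (
    SparkSession.builder
    .appName("spark_test")
    .master("local")
    .config("spark.sql.catalog.spark_catalog", "org.apache.iceberg.spark.SparkSessionCatalog")
    .config("spark.sql.catalog.local", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.spark_catalog.type", "hadoop")
    .config("spark.sql.catalog.spark_catalog.warehouse", "/iceberg_data")
    .config("spark.sql.extensions", "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .getOrCreate()
)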
2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.SparkSession setDefaultSession ro61 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: r u SparkSession rj e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession setActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.SparkSession setActiveSession ro61 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] INFO : Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer?restype=REDACTED' Request method: 'PUT' Request headers: 'x-ms-version': 'REDACTED' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b4506e7c-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' No body was attached to the request (_universal.py:514, on_request) 2025-04-04 18:14:52 [ 670 ] DEBUG : http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer?restype=container HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:14:52 [ 670 ] INFO : Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x1BE2F0449B7E2A0"' 'last-modified': 'Fri, 04 Apr 2025 18:14:52 GMT' 'x-ms-client-request-id': 'b4506e7c-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '5d0e4080-63b5-44a1-b8df-296361418435' 'x-ms-version': 'REDACTED' 'Date': 'Fri, 04 Apr 2025 18:14:52 GMT' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' (_universal.py:550, on_response) ----------------------------- Captured stdout call ----------------------------- 25/04/04 18:14:55 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 25/04/04 18:14:55 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 25/04/04 18:14:55 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 25/04/04 18:14:55 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 25/04/04 18:14:56 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 25/04/04 18:14:56 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. {} {} {} 25/04/04 18:14:58 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 25/04/04 18:14:58 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 
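[annotation] The repeated WindowExec warnings around this point are expected: the test pairs two single-column frames on a synthetic row index computed as row_number() over a Window ordered by monotonically_increasing_id() with no partitionBy, which forces all rows into one partition. Reconstructed from the Py4J replay in the stderr section below (a sketch, using the session from the annotation above):

from pyspark.sql import functions as F
from pyspark.sql.window import Window

# Two single-column frames joined on a synthetic row index; the
# unpartitioned Window is what triggers the WindowExec warning.
df_a = spark.range(0, 100).toDF("a")
df_b = spark.range(1, 101).toDF("b").withColumn("b", F.col("b").cast("string"))
w = Window.orderBy(F.monotonically_increasing_id())
df = (
    df_a.withColumn("row_index", F.row_number().over(w))
    .join(df_b.withColumn("row_index", F.row_number().over(w)), "row_index", "inner")
    .drop("row_index")
)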
25/04/04 18:14:58 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 25/04/04 18:14:58 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 25/04/04 18:14:58 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 25/04/04 18:14:58 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. {} {} {} 25/04/04 18:14:58 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 25/04/04 18:14:58 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 25/04/04 18:14:58 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 25/04/04 18:14:58 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 25/04/04 18:14:58 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 25/04/04 18:14:58 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. {} {} {} ----------------------------- Captured stderr call ----------------------------- Command to send: c o50 sc e Answer received: !yro62 Command to send: c o62 defaultParallelism e Answer received: !yi1 Command to send: c o61 range i0 i100 i1 i1 e Command to send: m d o27 e Answer received: !yv Command to send: m d o28 e Answer received: !yv Command to send: m d o29 e Answer received: !yv Command to send: m d o30 e Answer received: !yv Command to send: m d o31 e Answer received: !yv Command to send: m d o32 e Answer received: !yv Command to send: m d o33 e Answer received: !yv Command to send: m d o34 e Answer received: !yv Command to send: m d o35 e Answer received: !yv Command to send: m d o36 e Answer received: !yv Command to send: m d o37 e Answer received: !yv Command to send: m d o38 e Answer received: !yv Command to send: m d o39 e Answer received: !yv Command to send: m d o40 e Answer received: !yv Command to send: m d o41 e Answer received: !yv Command to send: m d o42 e Answer received: !yv Command to send: m d o43 e Answer received: !yv Command to send: m d o44 e Answer received: !yv Command to send: m d o45 e Answer received: !yv Command to send: m d o46 e Answer received: !yv Command to send: m d o47 e Answer received: !yv Command to send: m d o48 e Answer received: !yv Command to send: m d o49 e Answer received: !yv Command to send: m d o51 e Answer received: !yv Command to send: m d o54 e Answer received: !yv Command to send: m d o55 e Answer received: !yv Command to send: m d o56 e Answer received: !yv Command to send: m d o57 e Answer received: !yv Command to send: m d o58 e Answer received: !yv Command to send: m d o60 e Answer received: !yv Answer received: !yro63 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer 
received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo64 Command to send: c o64 add sa e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro64 e Answer received: !yro65 Command to send: c o63 toDF ro65 e Answer received: !yro66 Command to send: c o50 sc e Answer received: !yro67 Command to send: c o67 defaultParallelism e Answer received: !yi1 Command to send: c o61 range i1 i101 i1 i1 e Answer received: !yro68 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo69 Command to send: c o69 add sb e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro69 e Answer received: !yro70 Command to send: c o68 toDF ro70 e Answer received: !yro71 Command to send: c o71 apply sb e Answer received: !yro72 Command to send: r u SparkSession rj e Answer received: !ycorg.apache.spark.sql.SparkSession Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e Answer received: !ym Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e Answer received: !yro73 Command to send: c o73 isDefined e Answer received: !ybtrue Command to send: r u SparkSession rj e Answer received: !ycorg.apache.spark.sql.SparkSession Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e Answer received: !ym Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e Answer received: !yro74 Command to send: c o74 get e Answer received: !yro75 Command to send: r u SparkSession$ rj e Answer received: !ycorg.apache.spark.sql.SparkSession$ Command to send: r m org.apache.spark.sql.SparkSession$ MODULE$ e Answer received: !yro76 Command to send: i java.util.HashMap e Answer received: !yao77 Command to send: c o76 applyModifiableSettings ro75 ro77 e Answer received: !yv Command to send: c o61 parseDataType s"string" e Answer received: !yro78 Command to send: c o72 cast ro78 e Answer received: !yro79 Command to send: c o71 withColumn sb ro79 e Answer received: !yro80 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions row_number e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions row_number e Answer received: !yro81 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !yro82 Command to send: r u org rj e Answer received: !yp Command to send: r u org.apache rj e Answer received: !yp Command to send: r u org.apache.spark rj e Answer received: !yp Command to send: r u org.apache.spark.sql rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions.Window rj e Answer received: !ycorg.apache.spark.sql.expressions.Window Command to send: r m org.apache.spark.sql.expressions.Window orderBy e Answer received: !ym Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer 
received: !ylo83 Command to send: c o83 add ro82 e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro83 e Answer received: !yro84 Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro84 e Answer received: !yro85 Command to send: c o81 over ro85 e Answer received: !yro86 Command to send: c o66 withColumn srow_index ro86 e Command to send: m d o64 e Answer received: !yv Command to send: m d o69 e Answer received: !yv Command to send: m d o77 e Answer received: !yv Command to send: m d o62 e Answer received: !yv Command to send: m d o63 e Answer received: !yv Command to send: m d o65 e Answer received: !yv Command to send: m d o67 e Answer received: !yv Command to send: m d o68 e Answer received: !yv Command to send: m d o70 e Answer received: !yv Command to send: m d o71 e Answer received: !yv Command to send: m d o72 e Answer received: !yv Command to send: m d o73 e Answer received: !yv Command to send: m d o74 e Answer received: !yv Command to send: m d o76 e Answer received: !yv Command to send: m d o78 e Answer received: !yv Command to send: m d o79 e Answer received: !yv Command to send: m d o83 e Answer received: !yv Answer received: !yro87 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions row_number e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions row_number e Answer received: !yro88 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !yro89 Command to send: r u org rj e Answer received: !yp Command to send: r u org.apache rj e Answer received: !yp Command to send: r u org.apache.spark rj e Answer received: !yp Command to send: r u org.apache.spark.sql rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions.Window rj e Answer received: !ycorg.apache.spark.sql.expressions.Window Command to send: r m org.apache.spark.sql.expressions.Window orderBy e Answer received: !ym Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo90 Command to send: c o90 add ro89 e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro90 e Answer received: !yro91 Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro91 e Answer received: !yro92 Command to send: c o88 over ro92 e Answer received: !yro93 Command to send: c o80 withColumn srow_index ro93 e Answer received: !yro94 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo95 Command to send: c o95 add srow_index e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro95 e Answer received: !yro96 Command to send: c o87 join ro94 ro96 sinner e Answer received: !yro97 Command to send: c o97 drop srow_index e Answer received: !yro98 Command to send: c o98 writeTo 
stest_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8 e Answer received: !yro99 Command to send: c o99 tableProperty sformat-version s1 e Answer received: !yro100 Command to send: c o99 using siceberg e Answer received: !yro101 Command to send: c o99 create e Command to send: m d o90 e Answer received: !yv Command to send: m d o95 e Answer received: !yv Command to send: m d o100 e Answer received: !yv Command to send: m d o101 e Answer received: !yv [Stage 1:> (0 + 1) / 1] Answer received: !yv Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/data/00000-1-bd2316d9-b06a-4f02-bdd8-6cc31e471f9b-00001.parquet' Request method: 'PUT' Request headers: 'Content-Length': '967' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b77e1a54-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/data/00000-1-bd2316d9-b06a-4f02-bdd8-6cc31e471f9b-00001.parquet HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x211BE8E72A2CF00"' 'last-modified': 'Fri, 04 Apr 2025 18:14:58 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b77e1a54-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'cb9607f7-e198-40ed-b4ce-d4f6141d39d7' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:14:58 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/snap-4818665702231351300-1-2ed14062-58ff-46be-82c9-2511ce7d99a7.avro' Request method: 'PUT' Request headers: 'Content-Length': '3797' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b780ada0-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/snap-4818665702231351300-1-2ed14062-58ff-46be-82c9-2511ce7d99a7.avro HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x260D044D1305DA0"' 'last-modified': 'Fri, 04 Apr 2025 18:14:58 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b780ada0-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '2976d01d-36ba-4830-b2a4-6b5b45111470' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:14:58 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/2ed14062-58ff-46be-82c9-2511ce7d99a7-m0.avro' Request method: 'PUT' Request headers: 
'Content-Length': '5822' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b7827bd0-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/2ed14062-58ff-46be-82c9-2511ce7d99a7-m0.avro HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x25318EF6806F2C0"' 'last-modified': 'Fri, 04 Apr 2025 18:14:58 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b7827bd0-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'c7a53051-be1d-4de2-9b5d-cb2d91a64f32' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:14:58 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/version-hint.text' Request method: 'PUT' Request headers: 'Content-Length': '1' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b783e736-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/version-hint.text HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x200448A43957800"' 'last-modified': 'Fri, 04 Apr 2025 18:14:58 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b783e736-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'b70a3bb2-4119-484e-9978-f7ecebeda068' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:14:58 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/v1.metadata.json' Request method: 'PUT' Request headers: 'Content-Length': '2180' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b785775e-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/v1.metadata.json HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x230FB0219706CC0"' 'last-modified': 'Fri, 04 Apr 2025 18:14:58 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b785775e-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '57c7a94a-9e7c-4026-8fc6-97fe662cb447' 
'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:14:58 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Adding another dataframe. result files: ['/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/data/00000-1-bd2316d9-b06a-4f02-bdd8-6cc31e471f9b-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/snap-4818665702231351300-1-2ed14062-58ff-46be-82c9-2511ce7d99a7.avro', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/2ed14062-58ff-46be-82c9-2511ce7d99a7-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/version-hint.text', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/v1.metadata.json'] Command to send: c o50 sc e Answer received: !yro102 Command to send: c o102 defaultParallelism e Answer received: !yi1 Command to send: c o61 range i0 i100 i1 i1 e Answer received: !yro103 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo104 Command to send: c o104 add sa e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro104 e Answer received: !yro105 Command to send: c o103 toDF ro105 e Answer received: !yro106 Command to send: c o50 sc e Answer received: !yro107 Command to send: c o107 defaultParallelism e Answer received: !yi1 Command to send: c o61 range i1 i101 i1 i1 e Answer received: !yro108 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo109 Command to send: c o109 add sb e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro109 e Answer received: !yro110 Command to send: c o108 toDF ro110 e Answer received: !yro111 Command to send: c o111 apply sb e Answer received: !yro112 Command to send: r u SparkSession rj e Answer received: !ycorg.apache.spark.sql.SparkSession Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e Answer received: !ym Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e Answer received: !yro113 Command to send: c o113 isDefined e Answer received: !ybtrue Command to send: r u SparkSession rj e Answer received: !ycorg.apache.spark.sql.SparkSession Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e Answer received: !ym Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e Answer received: !yro114 Command to send: c o114 get e Answer received: !yro115 Command to send: r u SparkSession$ rj e Answer received: !ycorg.apache.spark.sql.SparkSession$ Command to send: r m org.apache.spark.sql.SparkSession$ MODULE$ e Answer received: !yro116 Command to send: i java.util.HashMap e Answer received: !yao117 Command to send: c o116 applyModifiableSettings ro115 ro117 e Answer received: !yv Command to send: c o61 parseDataType s"string" e Answer received: !yro118 Command to send: c o112 cast ro118 e Answer received: !yro119 Command to send: c o111 withColumn sb ro119 e Answer received: !yro120 
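[annotation] From here the captured stderr is the same Py4J traffic without timestamps. As a reading aid, a rough legend of the wire protocol (paraphrased from py4j.protocol; illustrative, not exhaustive):

# Hedged summary of the Py4J command/answer grammar seen in this trace.
PY4J_COMMANDS = {
    "c <obj> <method> <args> e": "call a method on object <obj>",
    "c z:<class> <method> <args> e": "call a static method on <class>",
    "r u <name> rj e": "resolve <name> (package or class) in the JVM",
    "r m <class> <method> e": "look up method <method> on <class>",
    "i <class> <args> e": "construct a new <class> instance",
    "m d oNN e": "drop the JVM-side reference oNN (Python-side GC)",
}
PY4J_ANSWER_TYPES = {  # answers are "!y" (success) + type tag + payload
    "r": "remote object reference (roNN)", "i": "int", "L": "long",
    "s": "string", "b": "boolean", "v": "void", "n": "null",
    "c": "class", "p": "package", "m": "method", "l": "list", "a": "map",
}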
Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions row_number e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions row_number e Answer received: !yro121 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !yro122 Command to send: r u org rj e Answer received: !yp Command to send: r u org.apache rj e Answer received: !yp Command to send: r u org.apache.spark rj e Answer received: !yp Command to send: r u org.apache.spark.sql rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions.Window rj e Answer received: !ycorg.apache.spark.sql.expressions.Window Command to send: r m org.apache.spark.sql.expressions.Window orderBy e Answer received: !ym Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo123 Command to send: c o123 add ro122 e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro123 e Answer received: !yro124 Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro124 e Answer received: !yro125 Command to send: c o121 over ro125 e Answer received: !yro126 Command to send: c o106 withColumn srow_index ro126 e Answer received: !yro127 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions row_number e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions row_number e Answer received: !yro128 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !yro129 Command to send: r u org rj e Answer received: !yp Command to send: r u org.apache rj e Answer received: !yp Command to send: r u org.apache.spark rj e Answer received: !yp Command to send: r u org.apache.spark.sql rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions.Window rj e Answer received: !ycorg.apache.spark.sql.expressions.Window Command to send: r m org.apache.spark.sql.expressions.Window orderBy e Answer received: !ym Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo130 Command to send: c o130 add ro129 e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro130 e Answer received: !yro131 Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro131 e Answer received: !yro132 Command to send: c o128 over ro132 e Answer received: !yro133 Command to send: c o120 withColumn srow_index ro133 e Answer received: !yro134 Command to send: r u PythonUtils rj e 
Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo135 Command to send: c o135 add srow_index e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro135 e Answer received: !yro136 Command to send: c o127 join ro134 ro136 sinner e Answer received: !yro137 Command to send: c o137 drop srow_index e Answer received: !yro138 Command to send: c o138 writeTo stest_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8 e Answer received: !yro139 Command to send: c o139 append e Answer received: !yv Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/data/00000-1-bd2316d9-b06a-4f02-bdd8-6cc31e471f9b-00001.parquet' Request method: 'PUT' Request headers: 'Content-Length': '967' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b7d0973e-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/data/00000-1-bd2316d9-b06a-4f02-bdd8-6cc31e471f9b-00001.parquet HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x1EEA475E66CC560"' 'last-modified': 'Fri, 04 Apr 2025 18:14:58 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b7d0973e-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'fee9771d-e06b-43dd-b786-40d367a2a657' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:14:58 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/data/00000-3-392a9f76-1ea4-43ca-82e0-714fe11b24c2-00001.parquet' Request method: 'PUT' Request headers: 'Content-Length': '967' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b7d2b38e-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/data/00000-3-392a9f76-1ea4-43ca-82e0-714fe11b24c2-00001.parquet HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x223766CF34C4760"' 'last-modified': 'Fri, 04 Apr 2025 18:14:58 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b7d2b38e-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '972dbe02-013b-4db6-aafd-8dac8191228c' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:14:58 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 
'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/v2.metadata.json' Request method: 'PUT' Request headers: 'Content-Length': '3230' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b7d4a41e-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/v2.metadata.json HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x216822A81B10B40"' 'last-modified': 'Fri, 04 Apr 2025 18:14:58 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b7d4a41e-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'f2388869-1897-4216-a4d1-bf6954741023' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:14:58 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/5cf8117a-f276-45ea-9dbd-84dd74fc7db8-m0.avro' Request method: 'PUT' Request headers: 'Content-Length': '5824' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b7d60d68-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/5cf8117a-f276-45ea-9dbd-84dd74fc7db8-m0.avro HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x1B609780BAC8AE0"' 'last-modified': 'Fri, 04 Apr 2025 18:14:58 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b7d60d68-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '3f0cd328-676f-482a-9f6d-4973532c21ad' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:14:58 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/snap-5409552635227421671-1-5cf8117a-f276-45ea-9dbd-84dd74fc7db8.avro' Request method: 'PUT' Request headers: 'Content-Length': '3866' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b7d7713a-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT 
/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/snap-5409552635227421671-1-5cf8117a-f276-45ea-9dbd-84dd74fc7db8.avro HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x255BE24C6DDFA20"' 'last-modified': 'Fri, 04 Apr 2025 18:14:58 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b7d7713a-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '39926839-5997-4b17-ac03-5baa7c7cd578' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:14:58 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/snap-4818665702231351300-1-2ed14062-58ff-46be-82c9-2511ce7d99a7.avro' Request method: 'PUT' Request headers: 'Content-Length': '3797' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b7d8b220-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/snap-4818665702231351300-1-2ed14062-58ff-46be-82c9-2511ce7d99a7.avro HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x24D676E311986E0"' 'last-modified': 'Fri, 04 Apr 2025 18:14:58 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b7d8b220-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'ce0e8560-9666-4ed7-926c-bcc601b0b3d3' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:14:58 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/2ed14062-58ff-46be-82c9-2511ce7d99a7-m0.avro' Request method: 'PUT' Request headers: 'Content-Length': '5822' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b7d9e2e4-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/2ed14062-58ff-46be-82c9-2511ce7d99a7-m0.avro HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x1BCE38CBE6AC2F0"' 'last-modified': 'Fri, 04 Apr 2025 18:14:58 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b7d9e2e4-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'f366949c-64b7-4a5a-86cf-45acebf1fd08' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:14:58 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 
'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/version-hint.text' Request method: 'PUT' Request headers: 'Content-Length': '1' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b7db28a2-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/version-hint.text HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x202F4D9B0232220"' 'last-modified': 'Fri, 04 Apr 2025 18:14:58 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b7db28a2-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '3f204434-7d8d-4088-af22-0ae01792a924' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:14:58 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/v1.metadata.json' Request method: 'PUT' Request headers: 'Content-Length': '2180' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b7dc2f7c-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/v1.metadata.json HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x22919A81BA9E760"' 'last-modified': 'Fri, 04 Apr 2025 18:14:58 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b7dc2f7c-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '8ea2afa9-50c6-4020-985a-d390546bd83d' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:14:58 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Adding another dataframe. 
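[annotation] The DataFrameWriterV2 calls replayed in this trace ("writeTo ... tableProperty ... using siceberg ... create", later "writeTo ... append") map to roughly the following PySpark; the table name is the one from the log, everything else is a sketch. Each append commits a new Iceberg snapshot, which is why the result-files list below grows a new snap-*.avro, *-m0.avro, and vN.metadata.json on every write.

# Sketch of the writer calls seen above (table name taken from the log).
TABLE = "test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8"

df.writeTo(TABLE).tableProperty("format-version", "1").using("iceberg").create()  # first snapshot
df.writeTo(TABLE).append()  # later snapshots: new snap-*.avro, *-m0.avro, vN.metadata.json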
result files: ['/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/data/00000-1-bd2316d9-b06a-4f02-bdd8-6cc31e471f9b-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/data/00000-3-392a9f76-1ea4-43ca-82e0-714fe11b24c2-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/v2.metadata.json', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/5cf8117a-f276-45ea-9dbd-84dd74fc7db8-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/snap-5409552635227421671-1-5cf8117a-f276-45ea-9dbd-84dd74fc7db8.avro', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/snap-4818665702231351300-1-2ed14062-58ff-46be-82c9-2511ce7d99a7.avro', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/2ed14062-58ff-46be-82c9-2511ce7d99a7-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/version-hint.text', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/v1.metadata.json'] Command to send: c o50 sc e Answer received: !yro140 Command to send: c o140 defaultParallelism e Answer received: !yi1 Command to send: c o61 range i0 i100 i1 i1 e Answer received: !yro141 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo142 Command to send: c o142 add sa e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro142 e Answer received: !yro143 Command to send: c o141 toDF ro143 e Answer received: !yro144 Command to send: c o50 sc e Answer received: !yro145 Command to send: c o145 defaultParallelism e Answer received: !yi1 Command to send: c o61 range i1 i101 i1 i1 e Answer received: !yro146 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo147 Command to send: c o147 add sb e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro147 e Answer received: !yro148 Command to send: c o146 toDF ro148 e Answer received: !yro149 Command to send: c o149 apply sb e Answer received: !yro150 Command to send: r u SparkSession rj e Answer received: !ycorg.apache.spark.sql.SparkSession Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e Answer received: !ym Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e Answer received: !yro151 Command to send: c o151 isDefined e Answer received: !ybtrue Command to send: r u SparkSession rj e Answer received: !ycorg.apache.spark.sql.SparkSession Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e Answer received: !ym Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e Answer received: !yro152 Command to send: c o152 get e Answer received: !yro153 Command to send: r u SparkSession$ rj e Answer received: !ycorg.apache.spark.sql.SparkSession$ Command to send: r m org.apache.spark.sql.SparkSession$ MODULE$ e Answer 
received: !yro154 Command to send: i java.util.HashMap e Answer received: !yao155 Command to send: c o154 applyModifiableSettings ro153 ro155 e Answer received: !yv Command to send: c o61 parseDataType s"string" e Answer received: !yro156 Command to send: c o150 cast ro156 e Answer received: !yro157 Command to send: c o149 withColumn sb ro157 e Answer received: !yro158 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions row_number e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions row_number e Answer received: !yro159 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !yro160 Command to send: r u org rj e Answer received: !yp Command to send: r u org.apache rj e Answer received: !yp Command to send: r u org.apache.spark rj e Answer received: !yp Command to send: r u org.apache.spark.sql rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions.Window rj e Answer received: !ycorg.apache.spark.sql.expressions.Window Command to send: r m org.apache.spark.sql.expressions.Window orderBy e Answer received: !ym Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo161 Command to send: c o161 add ro160 e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro161 e Answer received: !yro162 Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro162 e Answer received: !yro163 Command to send: c o159 over ro163 e Answer received: !yro164 Command to send: c o144 withColumn srow_index ro164 e Answer received: !yro165 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions row_number e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions row_number e Answer received: !yro166 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !yro167 Command to send: r u org rj e Answer received: !yp Command to send: r u org.apache rj e Answer received: !yp Command to send: r u org.apache.spark rj e Answer received: !yp Command to send: r u org.apache.spark.sql rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions.Window rj e Answer received: !ycorg.apache.spark.sql.expressions.Window Command to send: r m org.apache.spark.sql.expressions.Window orderBy e Answer received: !ym Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo168 Command to send: c o168 add ro167 e Answer received: !ybtrue Command to 
send: c z:org.apache.spark.api.python.PythonUtils toSeq ro168 e Answer received: !yro169 Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro169 e Answer received: !yro170 Command to send: c o166 over ro170 e Answer received: !yro171 Command to send: c o158 withColumn srow_index ro171 e Answer received: !yro172 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo173 Command to send: c o173 add srow_index e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro173 e Answer received: !yro174 Command to send: c o165 join ro172 ro174 sinner e Answer received: !yro175 Command to send: c o175 drop srow_index e Answer received: !yro176 Command to send: c o176 writeTo stest_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8 e Answer received: !yro177 Command to send: c o177 append e Command to send: m d o82 e Answer received: !yv Command to send: m d o84 e Answer received: !yv Command to send: m d o85 e Answer received: !yv Command to send: m d o86 e Answer received: !yv Command to send: m d o87 e Answer received: !yv Command to send: m d o88 e Answer received: !yv Command to send: m d o89 e Answer received: !yv Command to send: m d o91 e Answer received: !yv Command to send: m d o92 e Answer received: !yv Command to send: m d o93 e Answer received: !yv Command to send: m d o94 e Answer received: !yv Command to send: m d o96 e Answer received: !yv Command to send: m d o97 e Answer received: !yv Command to send: m d o98 e Answer received: !yv Command to send: m d o99 e Answer received: !yv Command to send: m d o104 e Answer received: !yv Command to send: m d o109 e Answer received: !yv Command to send: m d o117 e Answer received: !yv Command to send: m d o123 e Answer received: !yv Command to send: m d o130 e Answer received: !yv Command to send: m d o135 e Answer received: !yv Command to send: m d o102 e Answer received: !yv Command to send: m d o103 e Answer received: !yv Command to send: m d o105 e Answer received: !yv Command to send: m d o106 e Answer received: !yv Command to send: m d o107 e Answer received: !yv Command to send: m d o108 e Answer received: !yv Command to send: m d o110 e Answer received: !yv Command to send: m d o111 e Answer received: !yv Command to send: m d o112 e Answer received: !yv Command to send: m d o113 e Answer received: !yv Command to send: m d o114 e Answer received: !yv Command to send: m d o116 e Answer received: !yv Command to send: m d o118 e Answer received: !yv Command to send: m d o119 e Answer received: !yv Command to send: m d o120 e Answer received: !yv Command to send: m d o121 e Answer received: !yv Command to send: m d o122 e Answer received: !yv Command to send: m d o124 e Answer received: !yv Command to send: m d o125 e Answer received: !yv Command to send: m d o126 e Answer received: !yv Command to send: m d o127 e Answer received: !yv Command to send: m d o128 e Answer received: !yv Command to send: m d o129 e Answer received: !yv Command to send: m d o131 e Answer received: !yv Command to send: m d o132 e Answer received: !yv Command to send: m d o133 e Answer received: !yv Command to send: m d o134 e Answer received: !yv Command to send: m d o136 e Answer received: !yv Command to send: m d o137 e Answer received: !yv Command to send: m d o138 e Answer received: !yv Command to send: 
m d o139 e Answer received: !yv Command to send: m d o142 e Answer received: !yv Command to send: m d o147 e Answer received: !yv Command to send: m d o155 e Answer received: !yv Command to send: m d o161 e Answer received: !yv Command to send: m d o168 e Answer received: !yv Command to send: m d o140 e Answer received: !yv Command to send: m d o141 e Answer received: !yv Command to send: m d o143 e Answer received: !yv Command to send: m d o144 e Answer received: !yv Command to send: m d o145 e Answer received: !yv Command to send: m d o146 e Answer received: !yv Command to send: m d o148 e Answer received: !yv Command to send: m d o149 e Answer received: !yv Command to send: m d o150 e Answer received: !yv Command to send: m d o151 e Answer received: !yv Command to send: m d o152 e Answer received: !yv Command to send: m d o154 e Answer received: !yv Command to send: m d o156 e Answer received: !yv Command to send: m d o157 e Answer received: !yv Command to send: m d o159 e Answer received: !yv Command to send: m d o160 e Answer received: !yv Command to send: m d o162 e Answer received: !yv Command to send: m d o163 e Answer received: !yv Command to send: m d o164 e Answer received: !yv Command to send: m d o167 e Answer received: !yv Command to send: m d o169 e Answer received: !yv Command to send: m d o173 e Answer received: !yv Answer received: !yv Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/data/00000-1-bd2316d9-b06a-4f02-bdd8-6cc31e471f9b-00001.parquet' Request method: 'PUT' Request headers: 'Content-Length': '967' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b8154af0-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/data/00000-1-bd2316d9-b06a-4f02-bdd8-6cc31e471f9b-00001.parquet HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x20DBAD95F172CE0"' 'last-modified': 'Fri, 04 Apr 2025 18:14:59 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b8154af0-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '4156b273-d71e-4086-aaad-9fe1ab08ce6f' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:14:59 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/data/00000-3-392a9f76-1ea4-43ca-82e0-714fe11b24c2-00001.parquet' Request method: 'PUT' Request headers: 'Content-Length': '967' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b81723f2-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT 
/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/data/00000-3-392a9f76-1ea4-43ca-82e0-714fe11b24c2-00001.parquet HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x25D8FA2B6026620"' 'last-modified': 'Fri, 04 Apr 2025 18:14:59 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b81723f2-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'b4654274-f0e3-440a-acbc-fa1211df541a' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:14:59 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/data/00000-5-1f321089-89c7-4476-ac65-0df2bd8b754e-00001.parquet' Request method: 'PUT' Request headers: 'Content-Length': '967' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b818e6b0-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/data/00000-5-1f321089-89c7-4476-ac65-0df2bd8b754e-00001.parquet HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x21EF899421CEF80"' 'last-modified': 'Fri, 04 Apr 2025 18:14:59 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b818e6b0-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'b2311377-95cd-477f-8b22-4773f7a1c801' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:14:59 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/snap-4087525721751245634-1-8b3c914d-3221-4dcb-aae6-b229db6f71fe.avro' Request method: 'PUT' Request headers: 'Content-Length': '3909' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b81a7bc4-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/snap-4087525721751245634-1-8b3c914d-3221-4dcb-aae6-b229db6f71fe.avro HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x2479C8CD926BC00"' 'last-modified': 'Fri, 04 Apr 2025 18:14:59 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b81a7bc4-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '44d1e0e4-1b54-4d23-970b-14d4701d7fe1' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:14:59 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 
'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/v2.metadata.json' Request method: 'PUT' Request headers: 'Content-Length': '3230' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b81c08fe-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/v2.metadata.json HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x23A4DE3F4B93AC0"' 'last-modified': 'Fri, 04 Apr 2025 18:14:59 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b81c08fe-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'b579577c-8815-4b80-931a-9ee9e441b868' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:14:59 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/5cf8117a-f276-45ea-9dbd-84dd74fc7db8-m0.avro' Request method: 'PUT' Request headers: 'Content-Length': '5824' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b81db14a-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/5cf8117a-f276-45ea-9dbd-84dd74fc7db8-m0.avro HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x1BB7EB4FB556EA0"' 'last-modified': 'Fri, 04 Apr 2025 18:14:59 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b81db14a-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'ebfd3760-f3d6-4b1c-b37b-a50d97bdd697' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:14:59 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/snap-5409552635227421671-1-5cf8117a-f276-45ea-9dbd-84dd74fc7db8.avro' Request method: 'PUT' Request headers: 'Content-Length': '3866' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b81f10a8-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT 
/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/snap-5409552635227421671-1-5cf8117a-f276-45ea-9dbd-84dd74fc7db8.avro HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x1F9A9E5E67448C0"' 'last-modified': 'Fri, 04 Apr 2025 18:14:59 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b81f10a8-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'f7620db5-2220-4b65-9dc6-a594226f2144' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:14:59 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/snap-4818665702231351300-1-2ed14062-58ff-46be-82c9-2511ce7d99a7.avro' Request method: 'PUT' Request headers: 'Content-Length': '3797' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b8208118-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/snap-4818665702231351300-1-2ed14062-58ff-46be-82c9-2511ce7d99a7.avro HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x1E4178E84552250"' 'last-modified': 'Fri, 04 Apr 2025 18:14:59 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b8208118-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'ed1f102c-e22e-4867-b25f-c33f94406dbf' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:14:59 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/2ed14062-58ff-46be-82c9-2511ce7d99a7-m0.avro' Request method: 'PUT' Request headers: 'Content-Length': '5822' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b82242d2-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/2ed14062-58ff-46be-82c9-2511ce7d99a7-m0.avro HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x1F14CD748A7F460"' 'last-modified': 'Fri, 04 Apr 2025 18:14:59 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b82242d2-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '542da411-d21d-47dc-80ba-a79e0c49f65e' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:14:59 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 
'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/version-hint.text' Request method: 'PUT' Request headers: 'Content-Length': '1' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b823efa6-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/version-hint.text HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x1E0E206718F4AB0"' 'last-modified': 'Fri, 04 Apr 2025 18:14:59 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b823efa6-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '9975841a-f989-4b83-a834-a1d2d993928e' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:14:59 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/v1.metadata.json' Request method: 'PUT' Request headers: 'Content-Length': '2180' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b825510c-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/v1.metadata.json HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x24D70F240081BE0"' 'last-modified': 'Fri, 04 Apr 2025 18:14:59 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b825510c-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'cb8d2f6b-4153-4578-8a4b-b46b8b3cdb73' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:14:59 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/8b3c914d-3221-4dcb-aae6-b229db6f71fe-m0.avro' Request method: 'PUT' Request headers: 'Content-Length': '5823' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b826bef2-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/8b3c914d-3221-4dcb-aae6-b229db6f71fe-m0.avro HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 
'etag': '"0x1BA34D358D4A570"' 'last-modified': 'Fri, 04 Apr 2025 18:14:59 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b826bef2-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '788c4569-b150-4459-9f4b-4a20ece4bbc1' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:14:59 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/v3.metadata.json' Request method: 'PUT' Request headers: 'Content-Length': '4281' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b8281c3e-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/v3.metadata.json HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x202778F1398A2A0"' 'last-modified': 'Fri, 04 Apr 2025 18:14:59 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b8281c3e-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '7c3478e6-33d3-4591-b769-728467203716' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:14:59 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Adding another dataframe. result files: ['/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/data/00000-1-bd2316d9-b06a-4f02-bdd8-6cc31e471f9b-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/data/00000-3-392a9f76-1ea4-43ca-82e0-714fe11b24c2-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/data/00000-5-1f321089-89c7-4476-ac65-0df2bd8b754e-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/snap-4087525721751245634-1-8b3c914d-3221-4dcb-aae6-b229db6f71fe.avro', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/v2.metadata.json', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/5cf8117a-f276-45ea-9dbd-84dd74fc7db8-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/snap-5409552635227421671-1-5cf8117a-f276-45ea-9dbd-84dd74fc7db8.avro', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/snap-4818665702231351300-1-2ed14062-58ff-46be-82c9-2511ce7d99a7.avro', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/2ed14062-58ff-46be-82c9-2511ce7d99a7-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/version-hint.text', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/v1.metadata.json', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/8b3c914d-3221-4dcb-aae6-b229db6f71fe-m0.avro', 
Setup complete. files: ['/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/data/00000-1-bd2316d9-b06a-4f02-bdd8-6cc31e471f9b-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/data/00000-3-392a9f76-1ea4-43ca-82e0-714fe11b24c2-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/data/00000-5-1f321089-89c7-4476-ac65-0df2bd8b754e-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/snap-4087525721751245634-1-8b3c914d-3221-4dcb-aae6-b229db6f71fe.avro', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/v2.metadata.json', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/5cf8117a-f276-45ea-9dbd-84dd74fc7db8-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/snap-5409552635227421671-1-5cf8117a-f276-45ea-9dbd-84dd74fc7db8.avro', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/snap-4818665702231351300-1-2ed14062-58ff-46be-82c9-2511ce7d99a7.avro', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/2ed14062-58ff-46be-82c9-2511ce7d99a7-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/version-hint.text', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/v1.metadata.json', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/8b3c914d-3221-4dcb-aae6-b229db6f71fe-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/v3.metadata.json']
Executing query SELECT * FROM system.clusters on node1
Clusters setup:
cluster_simple 1 1 0 1 node1 172.16.2.10 9000 1 default 0 0 0 \N \N \N
cluster_simple 1 1 0 2 node2 172.16.2.8 9000 0 default 0 0 0 \N \N \N
cluster_simple 1 1 0 3 node3 172.16.2.9 9000 0 default 0 0 0 \N \N \N
Executing query SELECT * FROM icebergAzure(azure, container = 'mycontainer', storage_account_url = 'http://azurite1:30000/devstoreaccount1', blob_path = '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/', format=Parquet) on node1
Executing query SELECT * FROM icebergAzureCluster('cluster_simple', azure, container = 'mycontainer', storage_account_url = 'http://azurite1:30000/devstoreaccount1', blob_path = '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/', format=Parquet) on node1
Executing query SELECT * FROM icebergAzure(azure, container = 'mycontainer', storage_account_url = 'http://azurite1:30000/devstoreaccount1', blob_path = '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/', format=Parquet) SETTINGS object_storage_cluster='cluster_simple' on node1
Executing query DROP TABLE IF EXISTS test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8; CREATE TABLE test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8 ENGINE=IcebergAzure(azure, container = mycontainer, storage_account_url = 'http://azurite1:30000/devstoreaccount1', blob_path = '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/', format=Parquet) SETTINGS object_storage_cluster = 'cluster_simple' on node1
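The SELECTs above read the same Iceberg table through three routes that should be equivalent: the plain icebergAzure table function evaluated on node1 alone, the explicit icebergAzureCluster function that fans file processing out over cluster_simple, and the plain function with the object_storage_cluster setting applied. A sketch in the integration-test style (node1.query is the usual harness helper; its use here, and the final comparison, are assumptions about the test's intent):

    TABLE_DIR = "/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/"
    ARGS = (
        "azure, container = 'mycontainer', "
        "storage_account_url = 'http://azurite1:30000/devstoreaccount1', "
        f"blob_path = '{TABLE_DIR}', format=Parquet"
    )

    plain = node1.query(f"SELECT * FROM icebergAzure({ARGS})")
    clustered = node1.query(f"SELECT * FROM icebergAzureCluster('cluster_simple', {ARGS})")
    via_setting = node1.query(
        f"SELECT * FROM icebergAzure({ARGS}) "
        "SETTINGS object_storage_cluster='cluster_simple'"
    )
    # All three paths should return the same rows; row order may differ,
    # so a real test would presumably compare sorted results.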
Command to send: m d o26 e Answer received: !yv
Command to send: m d o59 e Answer received: !yv
Command to send: m d o66 e Answer received: !yv
Command to send: m d o80 e Answer received: !yv
Command to send: m d o81 e Answer received: !yv
Command to send: m d o75 e Answer received: !yv
Command to send: m d o115 e Answer received: !yv
Command to send: m d o170 e Answer received: !yv
Command to send: m d o171 e Answer received: !yv
Command to send: m d o158 e Answer received: !yv
Command to send: m d o165 e Answer received: !yv
Command to send: m d o166 e Answer received: !yv
Command to send: m d o172 e Answer received: !yv
Command to send: m d o174 e Answer received: !yv
Command to send: m d o175 e Answer received: !yv
Command to send: m d o176 e Answer received: !yv
Command to send: m d o177 e Answer received: !yv
Executing query SELECT * FROM test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8 on node1
Executing query SELECT * FROM remote('node2', icebergAzureCluster('cluster_simple', azure, container = 'mycontainer', storage_account_url = 'http://azurite1:30000/devstoreaccount1', blob_path = '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/', format=Parquet) ) on node1
Executing query DROP TABLE IF EXISTS `test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8` SYNC on node1
Executing query DROP TABLE IF EXISTS test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8; CREATE TABLE test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8 ENGINE=IcebergAzure(azure, container = mycontainer, storage_account_url = 'http://azurite1:30000/devstoreaccount1', blob_path = '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/', format=Parquet) on node1
Executing query SELECT * FROM test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8 on node1
Executing query SELECT * FROM test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8 SETTINGS object_storage_cluster='cluster_simple' on node1
------------------------------ Captured log call -------------------------------
2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o50 sc e (clientserver.py:501, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yro62 (clientserver.py:512, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o62 defaultParallelism e (clientserver.py:501, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yi1 (clientserver.py:512, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: c o61 range i0 i100 i1 i1 e (clientserver.py:501, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: m d o27 e (clientserver.py:501, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: m d o28 e (clientserver.py:501, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: m d o29 e (clientserver.py:501, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: m d o30 e (clientserver.py:501, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
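The timestamped DEBUG entries in this captured section are py4j clientserver traffic between the Python test driver and the Spark JVM. Reading the wire format as it appears in this log: 'c <target> <method> <args> e' calls a method, 'i <class> e' instantiates a class, 'r u' and 'r m' resolve classes and methods by reflection, and 'm d oN e' detaches (frees) server-side object oN; answers start with '!y' plus a type tag, e.g. roNN for an object reference, v for void, i for an integer, btrue for a boolean, c for a class, m for a method, l for a list and, judging by the java.util.HashMap instantiation earlier, a for a map. A small hedged helper, not part of the test, for summarizing such a capture:

    import re
    from collections import Counter

    CMD = re.compile(r"Command to send: (\S+) (\S+)")

    def tally_py4j(log_text: str) -> Counter:
        """Count the py4j commands in a captured log section."""
        counts = Counter()
        for kind, arg in CMD.findall(log_text):
            if kind == "c":    # method call on object/class `arg`
                counts["call:" + arg] += 1
            elif kind == "m":  # memory command; 'm d oN' frees object oN
                counts["detach"] += 1
            elif kind == "i":  # instantiate class `arg`
                counts["new:" + arg] += 1
            elif kind == "r":  # reflection lookup ('r u' / 'r m')
                counts["reflect"] += 1
        return counts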
2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: m d o31 e (clientserver.py:501, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: m d o32 e (clientserver.py:501, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: m d o33 e (clientserver.py:501, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: m d o34 e (clientserver.py:501, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: m d o35 e (clientserver.py:501, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: m d o36 e (clientserver.py:501, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: m d o37 e (clientserver.py:501, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: m d o38 e (clientserver.py:501, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: m d o39 e (clientserver.py:501, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: m d o40 e (clientserver.py:501, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: m d o41 e (clientserver.py:501, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: m d o42 e (clientserver.py:501, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: m d o43 e (clientserver.py:501, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: m d o44 e (clientserver.py:501, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: m d o45 e (clientserver.py:501, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: m d o46 e (clientserver.py:501, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: m d o47 e (clientserver.py:501, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: m d o48 e (clientserver.py:501, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: m d o49 e
(clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: m d o51 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: m d o54 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: m d o55 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: m d o56 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: m d o57 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: m d o58 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Command to send: m d o60 e (clientserver.py:501, send_command) 2025-04-04 18:14:52 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yro63 (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !ylo64 (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: c o64 add sa e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro64 e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yro65 (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: c o63 toDF ro65 e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yro66 (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: c o50 sc e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yro67 (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: c o67 defaultParallelism e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yi1 (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: c o61 range i1 i101 i1 i1 e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yro68 (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : 
Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !ylo69 (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: c o69 add sb e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro69 e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yro70 (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: c o68 toDF ro70 e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yro71 (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: c o71 apply sb e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yro72 (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: r u SparkSession rj e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yro73 (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: c o73 isDefined e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: r u SparkSession rj e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yro74 (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: c o74 get e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yro75 (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: r u SparkSession$ rj e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession$ 
(clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession$ MODULE$ e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yro76 (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: i java.util.HashMap e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yao77 (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: c o76 applyModifiableSettings ro75 ro77 e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: c o61 parseDataType s"string" e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yro78 (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: c o72 cast ro78 e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yro79 (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: c o71 withColumn sb ro79 e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yro80 (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yro81 (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yro82 (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: r u org rj e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: r u org.apache rj e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: r u org.apache.spark rj e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql rj e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yp 
(clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions rj e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions.Window rj e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.expressions.Window (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.expressions.Window orderBy e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !ylo83 (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: c o83 add ro82 e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro83 e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yro84 (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro84 e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yro85 (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: c o81 over ro85 e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yro86 (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: c o66 withColumn srow_index ro86 e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: m d o64 e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: m d o69 e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: m d o77 e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: m d o62 e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: m d o63 e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: m d o65 e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] 
DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: m d o67 e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: m d o68 e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: m d o70 e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: m d o71 e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: m d o72 e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: m d o73 e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: m d o74 e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: m d o76 e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: m d o78 e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: m d o79 e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: m d o83 e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yro87 (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yro88 (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions 
monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yro89 (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: r u org rj e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: r u org.apache rj e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: r u org.apache.spark rj e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql rj e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions rj e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions.Window rj e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.expressions.Window (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.expressions.Window orderBy e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !ylo90 (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: c o90 add ro89 e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro90 e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yro91 (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro91 e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yro92 (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: c o88 over ro92 e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yro93 (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: c o80 withColumn srow_index ro93 e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yro94 (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to 
send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !ylo95 (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: c o95 add srow_index e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro95 e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yro96 (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: c o87 join ro94 ro96 sinner e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yro97 (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: c o97 drop srow_index e (clientserver.py:501, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Answer received: !yro98 (clientserver.py:512, send_command) 2025-04-04 18:14:54 [ 670 ] DEBUG : Command to send: c o98 writeTo stest_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8 e (clientserver.py:501, send_command) 2025-04-04 18:14:55 [ 670 ] DEBUG : Answer received: !yro99 (clientserver.py:512, send_command) 2025-04-04 18:14:55 [ 670 ] DEBUG : Command to send: c o99 tableProperty sformat-version s1 e (clientserver.py:501, send_command) 2025-04-04 18:14:55 [ 670 ] DEBUG : Answer received: !yro100 (clientserver.py:512, send_command) 2025-04-04 18:14:55 [ 670 ] DEBUG : Command to send: c o99 using siceberg e (clientserver.py:501, send_command) 2025-04-04 18:14:55 [ 670 ] DEBUG : Answer received: !yro101 (clientserver.py:512, send_command) 2025-04-04 18:14:55 [ 670 ] DEBUG : Command to send: c o99 create e (clientserver.py:501, send_command) 2025-04-04 18:14:55 [ 670 ] DEBUG : Command to send: m d o90 e (clientserver.py:501, send_command) 2025-04-04 18:14:55 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:55 [ 670 ] DEBUG : Command to send: m d o95 e (clientserver.py:501, send_command) 2025-04-04 18:14:55 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:55 [ 670 ] DEBUG : Command to send: m d o100 e (clientserver.py:501, send_command) 2025-04-04 18:14:55 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:55 [ 670 ] DEBUG : Command to send: m d o101 e (clientserver.py:501, send_command) 2025-04-04 18:14:55 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] INFO : Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/data/00000-1-bd2316d9-b06a-4f02-bdd8-6cc31e471f9b-00001.parquet' Request method: 'PUT' Request headers: 'Content-Length': '967' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 
'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b77e1a54-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request (_universal.py:511, on_request) 2025-04-04 18:14:58 [ 670 ] DEBUG : http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/data/00000-1-bd2316d9-b06a-4f02-bdd8-6cc31e471f9b-00001.parquet HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:14:58 [ 670 ] INFO : Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x211BE8E72A2CF00"' 'last-modified': 'Fri, 04 Apr 2025 18:14:58 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b77e1a54-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'cb9607f7-e198-40ed-b4ce-d4f6141d39d7' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:14:58 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' (_universal.py:550, on_response) 2025-04-04 18:14:58 [ 670 ] INFO : Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/snap-4818665702231351300-1-2ed14062-58ff-46be-82c9-2511ce7d99a7.avro' Request method: 'PUT' Request headers: 'Content-Length': '3797' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b780ada0-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request (_universal.py:511, on_request) 2025-04-04 18:14:58 [ 670 ] DEBUG : http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/snap-4818665702231351300-1-2ed14062-58ff-46be-82c9-2511ce7d99a7.avro HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:14:58 [ 670 ] INFO : Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x260D044D1305DA0"' 'last-modified': 'Fri, 04 Apr 2025 18:14:58 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b780ada0-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '2976d01d-36ba-4830-b2a4-6b5b45111470' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:14:58 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' (_universal.py:550, on_response) 2025-04-04 18:14:58 [ 670 ] INFO : Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/2ed14062-58ff-46be-82c9-2511ce7d99a7-m0.avro' Request method: 'PUT' Request headers: 'Content-Length': '5822' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b7827bd0-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request 
(_universal.py:511, on_request) 2025-04-04 18:14:58 [ 670 ] DEBUG : http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/2ed14062-58ff-46be-82c9-2511ce7d99a7-m0.avro HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:14:58 [ 670 ] INFO : Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x25318EF6806F2C0"' 'last-modified': 'Fri, 04 Apr 2025 18:14:58 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b7827bd0-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'c7a53051-be1d-4de2-9b5d-cb2d91a64f32' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:14:58 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' (_universal.py:550, on_response) 2025-04-04 18:14:58 [ 670 ] INFO : Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/version-hint.text' Request method: 'PUT' Request headers: 'Content-Length': '1' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b783e736-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request (_universal.py:511, on_request) 2025-04-04 18:14:58 [ 670 ] DEBUG : http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/version-hint.text HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:14:58 [ 670 ] INFO : Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x200448A43957800"' 'last-modified': 'Fri, 04 Apr 2025 18:14:58 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b783e736-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'b70a3bb2-4119-484e-9978-f7ecebeda068' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:14:58 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' (_universal.py:550, on_response) 2025-04-04 18:14:58 [ 670 ] INFO : Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/v1.metadata.json' Request method: 'PUT' Request headers: 'Content-Length': '2180' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b785775e-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request (_universal.py:511, on_request) 2025-04-04 18:14:58 [ 670 ] DEBUG : http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/v1.metadata.json HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:14:58 [ 670 ] INFO : Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x230FB0219706CC0"' 'last-modified': 'Fri, 04 Apr 2025 18:14:58 GMT' 
'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b785775e-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '57c7a94a-9e7c-4026-8fc6-97fe662cb447' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:14:58 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' (_universal.py:550, on_response) 2025-04-04 18:14:58 [ 670 ] INFO : Adding another dataframe. result files: ['/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/data/00000-1-bd2316d9-b06a-4f02-bdd8-6cc31e471f9b-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/snap-4818665702231351300-1-2ed14062-58ff-46be-82c9-2511ce7d99a7.avro', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/2ed14062-58ff-46be-82c9-2511ce7d99a7-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/version-hint.text', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/v1.metadata.json'] (test.py:645, add_df) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o50 sc e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro102 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o102 defaultParallelism e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yi1 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o61 range i0 i100 i1 i1 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro103 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ylo104 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o104 add sa e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro104 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro105 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o103 toDF ro105 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro106 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o50 sc e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro107 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o107 defaultParallelism e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yi1 (clientserver.py:512, send_command) 2025-04-04 
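
The clientserver.py DEBUG lines that begin above are py4j wire-protocol traffic between the test's Python process and the Spark driver JVM. As a reading aid, the command and answer prefixes occurring in this log decode roughly as follows (an annotation based on py4j's protocol constants, not part of the test output):

    # Decoder for the py4j traffic in this log ('e' terminates each command).
    PY4J_COMMANDS = {
        "c":   "call a method ('c z:<class> ...' targets a static method)",
        "r u": "reflection: resolve a name as a class or package",
        "r m": "reflection: look up a class member",
        "i":   "construct a JVM object",
        "m d": "release (garbage-collect) a JVM object binding",
    }
    PY4J_ANSWER_TYPES = {  # answers are '!' + 'y' (success) + a type code
        "ro": "object reference", "lo": "list", "ao": "map",
        "i": "integer", "b": "boolean", "c": "class name",
        "m": "method", "p": "package", "v": "void",
    }
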
18:14:58 [ 670 ] DEBUG : Command to send: c o61 range i1 i101 i1 i1 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro108 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ylo109 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o109 add sb e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro109 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro110 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o108 toDF ro110 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro111 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o111 apply sb e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro112 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u SparkSession rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro113 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o113 isDefined e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u SparkSession rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro114 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o114 get e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro115 
(clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u SparkSession$ rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession$ (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession$ MODULE$ e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro116 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: i java.util.HashMap e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yao117 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o116 applyModifiableSettings ro115 ro117 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o61 parseDataType s"string" e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro118 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o112 cast ro118 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro119 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o111 withColumn sb ro119 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro120 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro121 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro122 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u org rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u org.apache rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u org.apache.spark rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG 
: Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions.Window rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.expressions.Window (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.expressions.Window orderBy e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ylo123 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o123 add ro122 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro123 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro124 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro124 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro125 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o121 over ro125 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro126 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o106 withColumn srow_index ro126 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro127 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro128 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to 
send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro129 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u org rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u org.apache rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u org.apache.spark rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions.Window rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.expressions.Window (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.expressions.Window orderBy e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ylo130 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o130 add ro129 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro130 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro131 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro131 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : 
Answer received: !yro132 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o128 over ro132 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro133 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o120 withColumn srow_index ro133 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro134 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ylo135 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o135 add srow_index e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro135 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro136 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o127 join ro134 ro136 sinner e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro137 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o137 drop srow_index e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro138 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o138 writeTo stest_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro139 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o139 append e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] INFO : Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/data/00000-1-bd2316d9-b06a-4f02-bdd8-6cc31e471f9b-00001.parquet' Request method: 'PUT' Request headers: 'Content-Length': '967' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b7d0973e-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request (_universal.py:511, on_request) 2025-04-04 18:14:58 [ 670 ] DEBUG : http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/data/00000-1-bd2316d9-b06a-4f02-bdd8-6cc31e471f9b-00001.parquet HTTP/1.1" 201 0 (connectionpool.py:547, 
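
Taken together, the py4j exchange that just completed (objects o102 through o139, ending in an append answered with !yv) is the gateway-level trace of a short PySpark program: build a 100-row integer column a, build a matching string column b, zip the two frames with a row_number() window over monotonically_increasing_id(), and append the result to the Iceberg table. A rough reconstruction from the wire traffic, assuming a configured SparkSession named spark (my reading of the trace, not the test's verbatim source):

    from pyspark.sql import functions as F
    from pyspark.sql.window import Window

    table = "test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8"

    a = spark.range(0, 100).toDF("a")                      # wire: 'c o61 range i0 i100 i1 i1'
    b = (spark.range(1, 101).toDF("b")
             .withColumn("b", F.col("b").cast("string")))  # wire: 'parseDataType s"string"' + cast
    w = Window.orderBy(F.monotonically_increasing_id())    # wire: 'Window orderBy ...'
    a = a.withColumn("row_index", F.row_number().over(w))
    b = b.withColumn("row_index", F.row_number().over(w))
    df = a.join(b, ["row_index"], "inner").drop("row_index")
    df.writeTo(table).append()                             # wire: 'c o139 append e' -> '!yv'

The PUTs that follow (a new data Parquet file, v2.metadata.json, and fresh snapshot/manifest Avro files) are this append being committed to Azurite.
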
_make_request) 2025-04-04 18:14:58 [ 670 ] INFO : Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x1EEA475E66CC560"' 'last-modified': 'Fri, 04 Apr 2025 18:14:58 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b7d0973e-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'fee9771d-e06b-43dd-b786-40d367a2a657' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:14:58 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' (_universal.py:550, on_response) 2025-04-04 18:14:58 [ 670 ] INFO : Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/data/00000-3-392a9f76-1ea4-43ca-82e0-714fe11b24c2-00001.parquet' Request method: 'PUT' Request headers: 'Content-Length': '967' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b7d2b38e-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request (_universal.py:511, on_request) 2025-04-04 18:14:58 [ 670 ] DEBUG : http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/data/00000-3-392a9f76-1ea4-43ca-82e0-714fe11b24c2-00001.parquet HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:14:58 [ 670 ] INFO : Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x223766CF34C4760"' 'last-modified': 'Fri, 04 Apr 2025 18:14:58 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b7d2b38e-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '972dbe02-013b-4db6-aafd-8dac8191228c' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:14:58 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' (_universal.py:550, on_response) 2025-04-04 18:14:58 [ 670 ] INFO : Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/v2.metadata.json' Request method: 'PUT' Request headers: 'Content-Length': '3230' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b7d4a41e-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request (_universal.py:511, on_request) 2025-04-04 18:14:58 [ 670 ] DEBUG : http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/v2.metadata.json HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:14:58 [ 670 ] INFO : Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x216822A81B10B40"' 'last-modified': 'Fri, 04 Apr 2025 18:14:58 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b7d4a41e-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'f2388869-1897-4216-a4d1-bf6954741023' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:14:58 GMT' 
'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' (_universal.py:550, on_response) 2025-04-04 18:14:58 [ 670 ] INFO : Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/5cf8117a-f276-45ea-9dbd-84dd74fc7db8-m0.avro' Request method: 'PUT' Request headers: 'Content-Length': '5824' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b7d60d68-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request (_universal.py:511, on_request) 2025-04-04 18:14:58 [ 670 ] DEBUG : http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/5cf8117a-f276-45ea-9dbd-84dd74fc7db8-m0.avro HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:14:58 [ 670 ] INFO : Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x1B609780BAC8AE0"' 'last-modified': 'Fri, 04 Apr 2025 18:14:58 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b7d60d68-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '3f0cd328-676f-482a-9f6d-4973532c21ad' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:14:58 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' (_universal.py:550, on_response) 2025-04-04 18:14:58 [ 670 ] INFO : Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/snap-5409552635227421671-1-5cf8117a-f276-45ea-9dbd-84dd74fc7db8.avro' Request method: 'PUT' Request headers: 'Content-Length': '3866' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b7d7713a-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request (_universal.py:511, on_request) 2025-04-04 18:14:58 [ 670 ] DEBUG : http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/snap-5409552635227421671-1-5cf8117a-f276-45ea-9dbd-84dd74fc7db8.avro HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:14:58 [ 670 ] INFO : Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x255BE24C6DDFA20"' 'last-modified': 'Fri, 04 Apr 2025 18:14:58 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b7d7713a-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '39926839-5997-4b17-ac03-5baa7c7cd578' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:14:58 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' (_universal.py:550, on_response) 2025-04-04 18:14:58 [ 670 ] INFO : Request URL: 
'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/snap-4818665702231351300-1-2ed14062-58ff-46be-82c9-2511ce7d99a7.avro' Request method: 'PUT' Request headers: 'Content-Length': '3797' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b7d8b220-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request (_universal.py:511, on_request) 2025-04-04 18:14:58 [ 670 ] DEBUG : http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/snap-4818665702231351300-1-2ed14062-58ff-46be-82c9-2511ce7d99a7.avro HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:14:58 [ 670 ] INFO : Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x24D676E311986E0"' 'last-modified': 'Fri, 04 Apr 2025 18:14:58 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b7d8b220-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'ce0e8560-9666-4ed7-926c-bcc601b0b3d3' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:14:58 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' (_universal.py:550, on_response) 2025-04-04 18:14:58 [ 670 ] INFO : Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/2ed14062-58ff-46be-82c9-2511ce7d99a7-m0.avro' Request method: 'PUT' Request headers: 'Content-Length': '5822' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b7d9e2e4-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request (_universal.py:511, on_request) 2025-04-04 18:14:58 [ 670 ] DEBUG : http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/2ed14062-58ff-46be-82c9-2511ce7d99a7-m0.avro HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:14:58 [ 670 ] INFO : Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x1BCE38CBE6AC2F0"' 'last-modified': 'Fri, 04 Apr 2025 18:14:58 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b7d9e2e4-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'f366949c-64b7-4a5a-86cf-45acebf1fd08' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:14:58 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' (_universal.py:550, on_response) 2025-04-04 18:14:58 [ 670 ] INFO : Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/version-hint.text' Request method: 'PUT' Request headers: 'Content-Length': '1' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 
'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b7db28a2-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request (_universal.py:511, on_request) 2025-04-04 18:14:58 [ 670 ] DEBUG : http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/version-hint.text HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:14:58 [ 670 ] INFO : Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x202F4D9B0232220"' 'last-modified': 'Fri, 04 Apr 2025 18:14:58 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b7db28a2-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '3f204434-7d8d-4088-af22-0ae01792a924' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:14:58 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' (_universal.py:550, on_response) 2025-04-04 18:14:58 [ 670 ] INFO : Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/v1.metadata.json' Request method: 'PUT' Request headers: 'Content-Length': '2180' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b7dc2f7c-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request (_universal.py:511, on_request) 2025-04-04 18:14:58 [ 670 ] DEBUG : http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/v1.metadata.json HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:14:58 [ 670 ] INFO : Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x22919A81BA9E760"' 'last-modified': 'Fri, 04 Apr 2025 18:14:58 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b7dc2f7c-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '8ea2afa9-50c6-4020-985a-d390546bd83d' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:14:58 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' (_universal.py:550, on_response) 2025-04-04 18:14:58 [ 670 ] INFO : Adding another dataframe. 
result files: ['/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/data/00000-1-bd2316d9-b06a-4f02-bdd8-6cc31e471f9b-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/data/00000-3-392a9f76-1ea4-43ca-82e0-714fe11b24c2-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/v2.metadata.json', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/5cf8117a-f276-45ea-9dbd-84dd74fc7db8-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/snap-5409552635227421671-1-5cf8117a-f276-45ea-9dbd-84dd74fc7db8.avro', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/snap-4818665702231351300-1-2ed14062-58ff-46be-82c9-2511ce7d99a7.avro', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/2ed14062-58ff-46be-82c9-2511ce7d99a7-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/version-hint.text', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/v1.metadata.json'] (test.py:645, add_df) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o50 sc e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro140 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o140 defaultParallelism e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yi1 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o61 range i0 i100 i1 i1 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro141 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ylo142 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o142 add sa e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro142 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro143 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o141 toDF ro143 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro144 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o50 sc e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro145 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o145 
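
The result-files list that add_df (test.py:645) logs above shows the standard Iceberg table layout accumulating under the table path; note that the PUT batch before it re-uploaded the first commit's files (the earlier snapshot and manifest Avro files, version-hint.text, and v1.metadata.json) alongside the new ones, which suggests the helper syncs the whole local warehouse tree to Azurite after each commit. Schematically (glob-style names are mine; the full names appear in the list above):

    iceberg_data/default/<table>/
        data/00000-*-00001.parquet           one Parquet data file per append
        metadata/v<N>.metadata.json          table metadata, one new file per commit
        metadata/snap-<snapshot-id>-*.avro   manifest list for each snapshot
        metadata/<commit-uuid>-m0.avro       manifest file for each snapshot
        metadata/version-hint.text           latest metadata version number N
                                             (its Content-Length of 1 in the log
                                             matches a single-digit version)
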
defaultParallelism e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yi1 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o61 range i1 i101 i1 i1 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro146 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ylo147 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o147 add sb e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro147 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro148 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o146 toDF ro148 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro149 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o149 apply sb e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro150 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u SparkSession rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro151 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o151 isDefined e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u SparkSession rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro152 (clientserver.py:512, send_command) 
2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o152 get e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro153 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u SparkSession$ rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession$ (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession$ MODULE$ e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro154 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: i java.util.HashMap e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yao155 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o154 applyModifiableSettings ro153 ro155 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o61 parseDataType s"string" e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro156 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o150 cast ro156 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro157 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o149 withColumn sb ro157 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro158 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro159 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro160 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u org rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u org.apache rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, 
send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u org.apache.spark rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions.Window rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.expressions.Window (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.expressions.Window orderBy e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ylo161 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o161 add ro160 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro161 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro162 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro162 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro163 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o159 over ro163 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro164 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o144 withColumn srow_index ro164 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro165 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions row_number e 
(clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro166 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro167 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u org rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u org.apache rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u org.apache.spark rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions.Window rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.expressions.Window (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.expressions.Window orderBy e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ylo168 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o168 add ro167 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro168 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro169 (clientserver.py:512, send_command) 2025-04-04 
18:14:58 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro169 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro170 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o166 over ro170 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro171 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o158 withColumn srow_index ro171 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro172 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ylo173 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o173 add srow_index e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro173 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro174 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o165 join ro172 ro174 sinner e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro175 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o175 drop srow_index e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro176 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o176 writeTo stest_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yro177 (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: c o177 append e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o82 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o84 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o85 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o86 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o87 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, 
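
The run of m d oNN commands beginning above is py4j's reference cleanup: each one asks the JVM to drop its binding for an object whose Python-side proxy has been garbage-collected. It appears to interleave with the still-outstanding append on o177 because the finalizer traffic is issued independently of the command in flight, so the !yv answers in this stretch most plausibly belong to the deletes. The same release can be reproduced with a plain gateway; a sketch that assumes a JVM-side py4j GatewayServer is already listening on the default port:

    from py4j.java_gateway import JavaGateway

    gateway = JavaGateway()                  # connect to an existing GatewayServer
    lst = gateway.jvm.java.util.ArrayList()  # wire: 'i java.util.ArrayList e'
    lst.add("a")                             # wire: 'c o<N> add sa e' -> '!ybtrue'
    del lst                                  # proxy finalizer sends 'm d o<N> e'
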
send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o88 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o89 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o91 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o92 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o93 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o94 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o96 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o97 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o98 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o99 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o104 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o109 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o117 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o123 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o130 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o135 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o102 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o103 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o105 e 
(clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o106 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o107 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o108 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o110 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o111 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o112 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o113 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o114 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o116 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o118 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o119 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o120 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o121 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o122 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o124 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o125 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o126 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o127 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer 
received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o128 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o129 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o131 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o132 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o133 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o134 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o136 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o137 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o138 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o139 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o142 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o147 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o155 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o161 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o168 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o140 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o141 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o143 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] 
DEBUG : Command to send: m d o144 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o145 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o146 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o148 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o149 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o150 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o151 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o152 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o154 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o156 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o157 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o159 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o160 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o162 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o163 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o164 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o167 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o169 e (clientserver.py:501, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:14:58 [ 670 ] DEBUG : Command to send: m d o173 e (clientserver.py:501, send_command) 
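Reading aid for the Py4J traffic throughout this log: the commands appear to be one-letter opcodes ("c" calls a method on an object or, with a "z:" target, on a class; "i" constructs an object; "r u" / "r m" are reflection lookups; "m d" detaches an object on the JVM side), and every answer starts with "!". The following is a small illustrative decoder for the answer strings; the type-code mapping is inferred from the command/answer pairs observed in this log, not taken from the py4j sources.

# Illustrative only: decode the 'Answer received' strings seen in this log.
# The one-letter type codes below are an assumption inferred from observed
# pairs (e.g. "!yro170" -> success, remote object o170; "!yv" -> success, void).
ANSWER_TYPES = {
    "r": "remote object reference",  # !yro170
    "v": "void",                     # !yv (e.g. answers to "m d oNN e")
    "b": "boolean",                  # !ybtrue
    "i": "integer",                  # !yi1
    "c": "class name",               # !ycorg.apache.spark.api.python.PythonUtils
    "m": "method handle",            # !ym
    "l": "list object",              # !ylo173
    "p": "package",                  # !yp
}

def decode_answer(answer: str) -> str:
    """Decode one 'Answer received' string from the log, e.g. '!yro170'."""
    assert answer.startswith("!"), "answers in this log all begin with '!'"
    status, rest = answer[1], answer[2:]
    if status != "y":  # 'y' appears to mean success in this traffic
        return f"error: {rest}"
    if not rest:
        return "success (no payload)"
    kind = ANSWER_TYPES.get(rest[0], "unknown")
    return f"success, {kind}: {rest[1:] or '-'}"

if __name__ == "__main__":
    for a in ("!yro170", "!yv", "!ybtrue", "!yi1", "!ym"):
        print(a, "->", decode_answer(a))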
[... 11 Azurite PUT request/response cycles elided (18:14:59, _universal.py / connectionpool.py; two more follow below): every request was sent by 'azsdk-python-storage-blob/12.19.0 Python/3.10.12' with REDACTED auth headers, and every response was "201 Created" from 'Azurite-Blob/3.34.0'. Blobs uploaded under /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/ (Content-Length in bytes):
  data/00000-1-bd2316d9-b06a-4f02-bdd8-6cc31e471f9b-00001.parquet (967)
  data/00000-3-392a9f76-1ea4-43ca-82e0-714fe11b24c2-00001.parquet (967)
  data/00000-5-1f321089-89c7-4476-ac65-0df2bd8b754e-00001.parquet (967)
  metadata/snap-4087525721751245634-1-8b3c914d-3221-4dcb-aae6-b229db6f71fe.avro (3909)
  metadata/v2.metadata.json (3230)
  metadata/5cf8117a-f276-45ea-9dbd-84dd74fc7db8-m0.avro (5824)
  metadata/snap-5409552635227421671-1-5cf8117a-f276-45ea-9dbd-84dd74fc7db8.avro (3866)
  metadata/snap-4818665702231351300-1-2ed14062-58ff-46be-82c9-2511ce7d99a7.avro (3797)
  metadata/2ed14062-58ff-46be-82c9-2511ce7d99a7-m0.avro (5822)
  metadata/version-hint.text (1)
  metadata/v1.metadata.json (2180) ...]
[... final 2 PUT request/response cycles elided, same pattern: metadata/8b3c914d-3221-4dcb-aae6-b229db6f71fe-m0.avro (5823) and metadata/v3.metadata.json (4281), both "201 Created" ...]
2025-04-04 18:14:59 [ 670 ] INFO : Adding another dataframe.
result files: ['/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/data/00000-1-bd2316d9-b06a-4f02-bdd8-6cc31e471f9b-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/data/00000-3-392a9f76-1ea4-43ca-82e0-714fe11b24c2-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/data/00000-5-1f321089-89c7-4476-ac65-0df2bd8b754e-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/snap-4087525721751245634-1-8b3c914d-3221-4dcb-aae6-b229db6f71fe.avro', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/v2.metadata.json', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/5cf8117a-f276-45ea-9dbd-84dd74fc7db8-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/snap-5409552635227421671-1-5cf8117a-f276-45ea-9dbd-84dd74fc7db8.avro', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/snap-4818665702231351300-1-2ed14062-58ff-46be-82c9-2511ce7d99a7.avro', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/2ed14062-58ff-46be-82c9-2511ce7d99a7-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/version-hint.text', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/v1.metadata.json', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/8b3c914d-3221-4dcb-aae6-b229db6f71fe-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/metadata/v3.metadata.json'] (test.py:645, add_df) 2025-04-04 18:14:59 [ 670 ] INFO : Setup complete. 
files: [the same 13 files as in the "result files" list above] (test.py:653, test_cluster_table_function)
2025-04-04 18:14:59 [ 670 ] DEBUG : Executing query SELECT * FROM system.clusters on node1 (cluster.py:3677, query)
2025-04-04 18:14:59 [ 670 ] INFO : Clusters setup:
cluster_simple 1 1 0 1 node1 172.16.2.10 9000 1 default 0 0 0 \N \N \N
cluster_simple 1 1 0 2 node2 172.16.2.8 9000 0 default 0 0 0 \N \N \N
cluster_simple 1 1 0 3 node3 172.16.2.9 9000 0 default 0 0 0 \N \N \N
(test.py:657, test_cluster_table_function)
2025-04-04 18:14:59 [ 670 ] DEBUG : Executing query SELECT * FROM icebergAzure(azure, container = 'mycontainer', storage_account_url = 'http://azurite1:30000/devstoreaccount1', blob_path = '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/', format=Parquet) on node1 (cluster.py:3677, query)
2025-04-04 18:14:59 [ 670 ] DEBUG : Executing query SELECT * FROM icebergAzureCluster('cluster_simple', azure, container = 'mycontainer', storage_account_url = 'http://azurite1:30000/devstoreaccount1', blob_path = '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/', format=Parquet) on node1 (cluster.py:3677, query)
2025-04-04 18:14:59 [ 670 ] DEBUG : Executing query SELECT * FROM icebergAzure(azure, container = 'mycontainer', storage_account_url = 'http://azurite1:30000/devstoreaccount1', blob_path = '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/', format=Parquet) SETTINGS object_storage_cluster='cluster_simple' on node1 (cluster.py:3677, query)
2025-04-04 18:14:59 [ 670 ] DEBUG : Executing query DROP TABLE IF EXISTS test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8;
CREATE TABLE test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8 ENGINE=IcebergAzure(azure, container = mycontainer, storage_account_url = 'http://azurite1:30000/devstoreaccount1', blob_path = '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/', format=Parquet) SETTINGS object_storage_cluster = 'cluster_simple' on node1 (cluster.py:3677, query)
[... 17 Py4J garbage-collection round-trips elided (18:14:59): "m d oNN e" / "!yv" for objects o26, o59, o66, o75, o80, o81, o115, o158, o165, o166, o170-o172 and o174-o177 ...]
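For readability, the three equivalent read paths exercised by the "Executing query" lines above, condensed into one sketch. The queries are copied verbatim from the log; `instance` is assumed to be the test's node1 handle (as in the traceback further below), and all three should return the same 300 rows.

plain = (
    "SELECT * FROM icebergAzure(azure, container = 'mycontainer', "
    "storage_account_url = 'http://azurite1:30000/devstoreaccount1', "
    "blob_path = '/iceberg_data/default/"
    "test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/', "
    "format=Parquet)"
)

# 1. Plain table function: the whole read runs on the coordinator (node1).
instance.query(plain)

# 2. Cluster table function: icebergAzureCluster('cluster_simple', ...)
#    distributes the data files across the cluster's three nodes.
instance.query(plain.replace("icebergAzure(", "icebergAzureCluster('cluster_simple', "))

# 3. Alternative syntax: the plain function plus a setting.
instance.query(plain + " SETTINGS object_storage_cluster='cluster_simple'")

The fourth path, per the CREATE TABLE above, is the IcebergAzure table engine created with SETTINGS object_storage_cluster = 'cluster_simple'; that is the path whose assertion fails below.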
2025-04-04 18:14:59 [ 670 ] DEBUG : Executing query SELECT * FROM test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8 on node1 (cluster.py:3677, query)
2025-04-04 18:15:00 [ 670 ] DEBUG : Executing query SELECT * FROM remote('node2', icebergAzureCluster('cluster_simple', azure, container = 'mycontainer', storage_account_url = 'http://azurite1:30000/devstoreaccount1', blob_path = '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/', format=Parquet) ) on node1 (cluster.py:3677, query)
2025-04-04 18:15:00 [ 670 ] DEBUG : Executing query DROP TABLE IF EXISTS `test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8` SYNC on node1 (cluster.py:3677, query)
2025-04-04 18:15:00 [ 670 ] DEBUG : Executing query DROP TABLE IF EXISTS test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8;
CREATE TABLE test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8 ENGINE=IcebergAzure(azure, container = mycontainer, storage_account_url = 'http://azurite1:30000/devstoreaccount1', blob_path = '/iceberg_data/default/test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8/', format=Parquet) on node1 (cluster.py:3677, query)
2025-04-04 18:15:00 [ 670 ] DEBUG : Executing query SELECT * FROM test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8 on node1 (cluster.py:3677, query)
2025-04-04 18:15:00 [ 670 ] DEBUG : Executing query SELECT * FROM test_iceberg_cluster_1_azure_1dbc19cd_7125_49c1_934a_ba4621939fe8 SETTINGS object_storage_cluster='cluster_simple' on node1 (cluster.py:3677, query)
_____________________ test_cluster_table_function[azure-2] _____________________
[gw0] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = 
format_version = '2', storage_type = 'azure'

    @pytest.mark.parametrize("format_version", ["1", "2"])
    @pytest.mark.parametrize("storage_type", ["s3", "azure", "hdfs"])
    def test_cluster_table_function(started_cluster, format_version, storage_type):
        if is_arm() and storage_type == "hdfs":
            pytest.skip("Disabled test IcebergHDFS for aarch64")

        instance = started_cluster.instances["node1"]
        spark = started_cluster.spark_session

        TABLE_NAME = (
            "test_iceberg_cluster_"
            + format_version
            + "_"
            + storage_type
            + "_"
            + get_uuid_str()
        )

        def add_df(mode):
            write_iceberg_from_df(
                spark,
                generate_data(spark, 0, 100),
                TABLE_NAME,
                mode=mode,
                format_version=format_version,
            )
            files = default_upload_directory(
                started_cluster,
                storage_type,
                f"/iceberg_data/default/{TABLE_NAME}/",
                f"/iceberg_data/default/{TABLE_NAME}/",
            )
            logging.info(f"Adding another dataframe. result files: {files}")
            return files

        files = add_df(mode="overwrite")
        for i in range(1, len(started_cluster.instances)):
            files = add_df(mode="append")
        logging.info(f"Setup complete. files: {files}")
        assert len(files) == 5 + 4 * (len(started_cluster.instances) - 1)

        clusters = instance.query(f"SELECT * FROM system.clusters")
        logging.info(f"Clusters setup: {clusters}")

        # Regular Query only node1
        table_function_expr = get_creation_expression(
            storage_type, TABLE_NAME, started_cluster, table_function=True
        )
        select_regular = (
            instance.query(f"SELECT * FROM {table_function_expr}").strip().split()
        )

        # Cluster Query with node1 as coordinator
        table_function_expr_cluster = get_creation_expression(
            storage_type,
            TABLE_NAME,
            started_cluster,
            table_function=True,
            run_on_cluster=True,
        )
        query_id_cluster = str(uuid.uuid4())
        select_cluster = (
            instance.query(
                f"SELECT * FROM {table_function_expr_cluster}", query_id=query_id_cluster
            )
            .strip()
            .split()
        )

        # Cluster Query with node1 as coordinator with alternative syntax
        query_id_cluster_alt_syntax = str(uuid.uuid4())
        select_cluster_alt_syntax = (
            instance.query(
                f"""
                SELECT * FROM {table_function_expr}
                SETTINGS object_storage_cluster='cluster_simple'
                """,
                query_id=query_id_cluster_alt_syntax,
            )
            .strip()
            .split()
        )

        create_iceberg_table(storage_type, instance, TABLE_NAME, started_cluster, object_storage_cluster='cluster_simple')
        query_id_cluster_table_engine = str(uuid.uuid4())
        select_cluster_table_engine = (
            instance.query(
                f"""
                SELECT * FROM {TABLE_NAME}
                """,
                query_id=query_id_cluster_table_engine,
            )
            .strip()
            .split()
        )

        select_remote_cluster = (
            instance.query(f"SELECT * FROM remote('node2',{table_function_expr_cluster})")
            .strip()
            .split()
        )

        instance.query(f"DROP TABLE IF EXISTS `{TABLE_NAME}` SYNC")
        create_iceberg_table(storage_type, instance, TABLE_NAME, started_cluster)
        query_id_pure_table_engine = str(uuid.uuid4())
        select_pure_table_engine = (
            instance.query(
                f"""
                SELECT * FROM {TABLE_NAME}
                """,
                query_id=query_id_pure_table_engine,
            )
            .strip()
            .split()
        )
        query_id_pure_table_engine_cluster = str(uuid.uuid4())
        select_pure_table_engine_cluster = (
            instance.query(
                f"""
                SELECT * FROM {TABLE_NAME}
                SETTINGS object_storage_cluster='cluster_simple'
                """,
                query_id=query_id_pure_table_engine_cluster,
            )
            .strip()
            .split()
        )

        # Simple size check
        assert len(select_regular) == 600
        assert len(select_cluster) == 600
        assert len(select_cluster_alt_syntax) == 600
>       assert len(select_cluster_table_engine) == 600
E       AssertionError: assert 1800 == 600
E        +  where 1800 = len(['0', '1', '1', '2', '2', '3', ...])

test_storage_iceberg/test.py:747: AssertionError
----------------------------- Captured stdout call -----------------------------
25/04/04 18:15:01 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[... the same WindowExec warning repeated 18 times in total through the captured stdout, interleaved with three "{} {} {}" outputs ...]
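A back-of-the-envelope reading of the failing assertion, using only numbers visible in this log (an observation, not a confirmed root cause):

# Numbers taken from the traceback above. Each select is tokenized with
# .strip().split(), so tokens = rows * columns.
rows_per_write = 100   # generate_data(spark, 0, 100)
writes = 3             # one overwrite + one append per extra node (3 nodes)
columns = 2            # columns "a" (int) and "b" (string)

expected = rows_per_write * writes * columns
assert expected == 600           # matches the three passing selects

observed = 1800                  # len(select_cluster_table_engine)
assert observed == 3 * expected  # exactly one full copy of the data per node
                                 # in cluster_simple, as if the table-engine +
                                 # object_storage_cluster path did not split
                                 # the file list among the nodes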
----------------------------- Captured stderr call -----------------------------
Command to send: c o50 sc e
Answer received: !yro178
Command to send: c o178 defaultParallelism e
Answer received: !yi1
[... ~70 Py4J round-trips elided (objects o179-o213): the traffic builds spark.range(0, 100, 1, 1).toDF("a") and spark.range(1, 101, 1, 1).toDF("b") with "b" cast to string, adds a row_index column to each via row_number().over(Window.orderBy(monotonically_increasing_id())), inner-joins the two on row_index and drops it; a PySpark equivalent is sketched below ...]
Command to send: c o214 writeTo stest_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f e
Answer received: !yro215
Command to send: c o215 tableProperty sformat-version s2 e
Answer received: !yro216
Command to send: c o215 using siceberg e
Answer received: !yro217
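For readability, a rough PySpark equivalent of the round-trips summarized above (ending with the create call that completes just below), reconstructed from the logged calls; the test's actual generate_data / write_iceberg_from_df helpers may differ in detail, and `spark` is assumed to be the started_cluster.spark_session handle.

# A rough reconstruction of the Py4J traffic above, not the test's own code.
from pyspark.sql import functions as F
from pyspark.sql.window import Window

TABLE_NAME = "test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f"

a = spark.range(0, 100, 1, 1).toDF("a")              # c o61 range i0 i100 i1 i1
b = spark.range(1, 101, 1, 1).toDF("b")              # c o61 range i1 i101 i1 i1
b = b.withColumn("b", b["b"].cast("string"))         # parseDataType s"string" / cast

w = Window.orderBy(F.monotonically_increasing_id())  # Window orderBy ...
a = a.withColumn("row_index", F.row_number().over(w))
b = b.withColumn("row_index", F.row_number().over(w))

df = a.join(b, ["row_index"], "inner").drop("row_index")  # join ... sinner / drop srow_index

(df.writeTo(TABLE_NAME)                              # c o214 writeTo stest_iceberg_cluster_2_...
   .tableProperty("format-version", "2")             # tableProperty sformat-version s2
   .using("iceberg")                                 # using siceberg
   .create())                                        # c o215 create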
create e Answer received: !yv Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/data/00000-7-300670c2-cd0c-4ee4-aad2-a8b5a1b98d62-00001.parquet' Request method: 'PUT' Request headers: 'Content-Length': '967' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b96bc834-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/data/00000-7-300670c2-cd0c-4ee4-aad2-a8b5a1b98d62-00001.parquet HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x21025607BA9E9E0"' 'last-modified': 'Fri, 04 Apr 2025 18:15:01 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b96bc834-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'c37d510a-512f-4859-9f31-a35495a51e05' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:01 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/snap-8956046916379606865-1-97c98e73-bce7-4843-95bb-032c5640583a.avro' Request method: 'PUT' Request headers: 'Content-Length': '4293' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b96d1874-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/snap-8956046916379606865-1-97c98e73-bce7-4843-95bb-032c5640583a.avro HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x22C707E66DC8C80"' 'last-modified': 'Fri, 04 Apr 2025 18:15:01 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b96d1874-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '07085ba6-e2cd-4f15-b289-c413f00db0ad' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:01 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/97c98e73-bce7-4843-95bb-032c5640583a-m0.avro' Request method: 'PUT' Request headers: 'Content-Length': '6708' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b96e16a2-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT 
/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/97c98e73-bce7-4843-95bb-032c5640583a-m0.avro HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x25153593AA381A0"' 'last-modified': 'Fri, 04 Apr 2025 18:15:01 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b96e16a2-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '45ea37a0-8d46-4c60-ad29-6fbb44a68a39' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:01 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/version-hint.text' Request method: 'PUT' Request headers: 'Content-Length': '1' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b96f0ce2-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/version-hint.text HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x228DEF9DF377D40"' 'last-modified': 'Fri, 04 Apr 2025 18:15:01 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b96f0ce2-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'ce42782e-b293-4718-beab-95470927ec82' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:01 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/v1.metadata.json' Request method: 'PUT' Request headers: 'Content-Length': '1941' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b96ff7ba-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/v1.metadata.json HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x1DF7214A882C230"' 'last-modified': 'Fri, 04 Apr 2025 18:15:01 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b96ff7ba-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '51eced87-5edb-4de1-8d7d-a37ab3377371' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:01 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Adding another dataframe. 
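
The Py4J traffic above is the wire-level trace of the PySpark driver assembling the first table. Reconstructed as a rough sketch (the variable names are assumptions; the range() calls, the row_number().over(Window.orderBy(monotonically_increasing_id())) indexing trick, the inner join on "row_index", and the Iceberg format-version 2 create() are all visible in the commands):

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import monotonically_increasing_id, row_number
    from pyspark.sql.window import Window

    spark = SparkSession.builder.getOrCreate()
    table_name = "test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f"

    # Two single-column frames built with range(): "a" = 0..99 and "b" = 1..100.
    df_a = spark.range(0, 100, 1, 1).toDF("a")
    df_b = spark.range(1, 101, 1, 1).toDF("b")

    # Attach a synthetic positional index to each frame.
    w = Window.orderBy(monotonically_increasing_id())
    df_a = df_a.withColumn("row_index", row_number().over(w))
    df_b = df_b.withColumn("row_index", row_number().over(w))

    # Zip the frames by index, drop the helper column, create the Iceberg v2 table.
    df = df_a.join(df_b, ["row_index"], "inner").drop("row_index")
    df.writeTo(table_name).tableProperty("format-version", "2").using("iceberg").create()

Each JVM object id in the trace (o197, o199, o215, ...) is one intermediate value of this pipeline.
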
result files: ['/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/data/00000-7-300670c2-cd0c-4ee4-aad2-a8b5a1b98d62-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/snap-8956046916379606865-1-97c98e73-bce7-4843-95bb-032c5640583a.avro', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/97c98e73-bce7-4843-95bb-032c5640583a-m0.avro', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/version-hint.text', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/v1.metadata.json'] Command to send: c o50 sc e Answer received: !yro218 Command to send: c o218 defaultParallelism e Answer received: !yi1 Command to send: c o61 range i0 i100 i1 i1 e Answer received: !yro219 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo220 Command to send: c o220 add sa e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro220 e Answer received: !yro221 Command to send: c o219 toDF ro221 e Answer received: !yro222 Command to send: c o50 sc e Answer received: !yro223 Command to send: c o223 defaultParallelism e Answer received: !yi1 Command to send: c o61 range i1 i101 i1 i1 e Answer received: !yro224 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo225 Command to send: c o225 add sb e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro225 e Answer received: !yro226 Command to send: c o224 toDF ro226 e Answer received: !yro227 Command to send: c o227 apply sb e Answer received: !yro228 Command to send: r u SparkSession rj e Answer received: !ycorg.apache.spark.sql.SparkSession Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e Answer received: !ym Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e Answer received: !yro229 Command to send: c o229 isDefined e Answer received: !ybtrue Command to send: r u SparkSession rj e Answer received: !ycorg.apache.spark.sql.SparkSession Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e Answer received: !ym Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e Answer received: !yro230 Command to send: c o230 get e Answer received: !yro231 Command to send: r u SparkSession$ rj e Answer received: !ycorg.apache.spark.sql.SparkSession$ Command to send: r m org.apache.spark.sql.SparkSession$ MODULE$ e Answer received: !yro232 Command to send: i java.util.HashMap e Answer received: !yao233 Command to send: c o232 applyModifiableSettings ro231 ro233 e Answer received: !yv Command to send: c o61 parseDataType s"string" e Answer received: !yro234 Command to send: c o228 cast ro234 e Answer received: !yro235 Command to send: c o227 withColumn sb ro235 e Answer received: !yro236 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions row_number e Answer received: !ym Command to send: c 
z:org.apache.spark.sql.functions row_number e Answer received: !yro237 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !yro238 Command to send: r u org rj e Answer received: !yp Command to send: r u org.apache rj e Answer received: !yp Command to send: r u org.apache.spark rj e Answer received: !yp Command to send: r u org.apache.spark.sql rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions.Window rj e Answer received: !ycorg.apache.spark.sql.expressions.Window Command to send: r m org.apache.spark.sql.expressions.Window orderBy e Answer received: !ym Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo239 Command to send: c o239 add ro238 e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro239 e Answer received: !yro240 Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro240 e Answer received: !yro241 Command to send: c o237 over ro241 e Answer received: !yro242 Command to send: c o222 withColumn srow_index ro242 e Answer received: !yro243 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions row_number e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions row_number e Answer received: !yro244 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !yro245 Command to send: r u org rj e Answer received: !yp Command to send: r u org.apache rj e Answer received: !yp Command to send: r u org.apache.spark rj e Answer received: !yp Command to send: r u org.apache.spark.sql rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions.Window rj e Answer received: !ycorg.apache.spark.sql.expressions.Window Command to send: r m org.apache.spark.sql.expressions.Window orderBy e Answer received: !ym Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo246 Command to send: c o246 add ro245 e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro246 e Answer received: !yro247 Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro247 e Answer received: !yro248 Command to send: c o244 over ro248 e Answer received: !yro249 Command to send: c o236 withColumn srow_index ro249 e Answer received: !yro250 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e 
Answer received: !ylo251 Command to send: c o251 add srow_index e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro251 e Answer received: !yro252 Command to send: c o243 join ro250 ro252 sinner e Answer received: !yro253 Command to send: c o253 drop srow_index e Answer received: !yro254 Command to send: c o254 writeTo stest_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f e Answer received: !yro255 Command to send: c o255 append e Answer received: !yv Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/data/00000-9-785ff79c-df2b-447c-b715-d3b23a5e9e34-00001.parquet' Request method: 'PUT' Request headers: 'Content-Length': '967' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b9a62ac4-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/data/00000-9-785ff79c-df2b-447c-b715-d3b23a5e9e34-00001.parquet HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x2134CA27E6ECD00"' 'last-modified': 'Fri, 04 Apr 2025 18:15:01 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b9a62ac4-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '7e60bf31-aa44-4257-ad83-f48fcc20ce68' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:01 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/data/00000-7-300670c2-cd0c-4ee4-aad2-a8b5a1b98d62-00001.parquet' Request method: 'PUT' Request headers: 'Content-Length': '967' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b9a77af0-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/data/00000-7-300670c2-cd0c-4ee4-aad2-a8b5a1b98d62-00001.parquet HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x2522E3652A2E7E0"' 'last-modified': 'Fri, 04 Apr 2025 18:15:01 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b9a77af0-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '6c569448-919d-42d3-aa1c-d7d8f4f819aa' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:01 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/v2.metadata.json' Request method: 'PUT' Request headers: 'Content-Length': '3018' 'x-ms-blob-type': 'REDACTED' 
'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b9a88814-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/v2.metadata.json HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x1C333A6CF0FA1D0"' 'last-modified': 'Fri, 04 Apr 2025 18:15:01 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b9a88814-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '47707747-1eac-433d-bb3d-00fa2341fa7e' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:01 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/snap-8617900314050178602-1-aeb16799-dddb-4d66-96c3-6d911dfafa12.avro' Request method: 'PUT' Request headers: 'Content-Length': '4368' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b9a99312-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/snap-8617900314050178602-1-aeb16799-dddb-4d66-96c3-6d911dfafa12.avro HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x1DA756055758E50"' 'last-modified': 'Fri, 04 Apr 2025 18:15:01 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b9a99312-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '96cc654b-d073-4435-a310-b9c0e9baf661' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:01 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/snap-8956046916379606865-1-97c98e73-bce7-4843-95bb-032c5640583a.avro' Request method: 'PUT' Request headers: 'Content-Length': '4293' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b9ab49e6-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/snap-8956046916379606865-1-97c98e73-bce7-4843-95bb-032c5640583a.avro HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x215CA31A80155E0"' 'last-modified': 'Fri, 04 Apr 2025 18:15:01 GMT' 'content-md5': 'REDACTED' 
'x-ms-client-request-id': 'b9ab49e6-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'ebaeafa1-dafd-4fd2-b216-a13e13d848b5' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:01 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/97c98e73-bce7-4843-95bb-032c5640583a-m0.avro' Request method: 'PUT' Request headers: 'Content-Length': '6708' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b9aca5ca-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/97c98e73-bce7-4843-95bb-032c5640583a-m0.avro HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x25F55390550AD80"' 'last-modified': 'Fri, 04 Apr 2025 18:15:01 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b9aca5ca-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'c5d3b0b4-8e06-47e8-995b-ffa4a927ca3c' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:01 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/version-hint.text' Request method: 'PUT' Request headers: 'Content-Length': '1' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b9ad94bc-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/version-hint.text HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x1DDC44888BCFC70"' 'last-modified': 'Fri, 04 Apr 2025 18:15:01 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b9ad94bc-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '03ffea26-4d53-4be6-87f7-962d3f4e287f' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:01 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/aeb16799-dddb-4d66-96c3-6d911dfafa12-m0.avro' Request method: 'PUT' Request headers: 'Content-Length': '6710' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 
'b9ae636a-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/aeb16799-dddb-4d66-96c3-6d911dfafa12-m0.avro HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x2677F87631060E0"' 'last-modified': 'Fri, 04 Apr 2025 18:15:01 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b9ae636a-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'eb7af4fd-11c3-43b2-9bcf-91e65ec14686' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:01 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/v1.metadata.json' Request method: 'PUT' Request headers: 'Content-Length': '1941' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b9af3420-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/v1.metadata.json HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x23BF6EE1ECE1FA0"' 'last-modified': 'Fri, 04 Apr 2025 18:15:01 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b9af3420-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'f7544bfc-63bf-452d-83c6-5c5d882a1318' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:01 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Adding another dataframe. 
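
The second write cycle repeats the same frame construction, except that column "b" is cast to string (the parseDataType s"string" / cast commands) and the commit is an append() rather than a create(). A sketch of the tail end, under the same assumed names as above:

    # Cast "b" to string, matching the parseDataType/cast calls in the trace.
    df_b = df_b.withColumn("b", df_b["b"].cast("string"))

    # Same positional join as before, but add a new snapshot to the existing table.
    df = df_a.join(df_b, ["row_index"], "inner").drop("row_index")
    df.writeTo(table_name).append()

The PUT requests above show what lands in storage after each commit: the table directory is (re)uploaded with one new data .parquet file plus the refreshed Iceberg metadata set, that is a new snap-*.avro manifest list, a new *-m0.avro manifest, the next vN.metadata.json, and an overwritten version-hint.text.
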
result files: ['/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/data/00000-9-785ff79c-df2b-447c-b715-d3b23a5e9e34-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/data/00000-7-300670c2-cd0c-4ee4-aad2-a8b5a1b98d62-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/v2.metadata.json', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/snap-8617900314050178602-1-aeb16799-dddb-4d66-96c3-6d911dfafa12.avro', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/snap-8956046916379606865-1-97c98e73-bce7-4843-95bb-032c5640583a.avro', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/97c98e73-bce7-4843-95bb-032c5640583a-m0.avro', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/version-hint.text', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/aeb16799-dddb-4d66-96c3-6d911dfafa12-m0.avro', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/v1.metadata.json'] Command to send: c o50 sc e Answer received: !yro256 Command to send: c o256 defaultParallelism e Answer received: !yi1 Command to send: c o61 range i0 i100 i1 i1 e Answer received: !yro257 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo258 Command to send: c o258 add sa e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro258 e Answer received: !yro259 Command to send: c o257 toDF ro259 e Answer received: !yro260 Command to send: c o50 sc e Answer received: !yro261 Command to send: c o261 defaultParallelism e Answer received: !yi1 Command to send: c o61 range i1 i101 i1 i1 e Answer received: !yro262 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo263 Command to send: c o263 add sb e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro263 e Answer received: !yro264 Command to send: c o262 toDF ro264 e Answer received: !yro265 Command to send: c o265 apply sb e Answer received: !yro266 Command to send: r u SparkSession rj e Answer received: !ycorg.apache.spark.sql.SparkSession Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e Answer received: !ym Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e Answer received: !yro267 Command to send: c o267 isDefined e Answer received: !ybtrue Command to send: r u SparkSession rj e Answer received: !ycorg.apache.spark.sql.SparkSession Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e Answer received: !ym Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e Answer received: !yro268 Command to send: c o268 get e Answer received: !yro269 Command to send: r u SparkSession$ rj e Answer received: !ycorg.apache.spark.sql.SparkSession$ Command to send: r m org.apache.spark.sql.SparkSession$ MODULE$ e Answer 
received: !yro270 Command to send: i java.util.HashMap e Answer received: !yao271 Command to send: c o270 applyModifiableSettings ro269 ro271 e Answer received: !yv Command to send: c o61 parseDataType s"string" e Answer received: !yro272 Command to send: c o266 cast ro272 e Answer received: !yro273 Command to send: c o265 withColumn sb ro273 e Answer received: !yro274 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions row_number e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions row_number e Answer received: !yro275 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !yro276 Command to send: r u org rj e Answer received: !yp Command to send: r u org.apache rj e Answer received: !yp Command to send: r u org.apache.spark rj e Answer received: !yp Command to send: r u org.apache.spark.sql rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions.Window rj e Answer received: !ycorg.apache.spark.sql.expressions.Window Command to send: r m org.apache.spark.sql.expressions.Window orderBy e Answer received: !ym Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo277 Command to send: c o277 add ro276 e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro277 e Answer received: !yro278 Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro278 e Answer received: !yro279 Command to send: c o275 over ro279 e Answer received: !yro280 Command to send: c o260 withColumn srow_index ro280 e Answer received: !yro281 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions row_number e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions row_number e Answer received: !yro282 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !yro283 Command to send: r u org rj e Answer received: !yp Command to send: r u org.apache rj e Answer received: !yp Command to send: r u org.apache.spark rj e Answer received: !yp Command to send: r u org.apache.spark.sql rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions.Window rj e Answer received: !ycorg.apache.spark.sql.expressions.Window Command to send: r m org.apache.spark.sql.expressions.Window orderBy e Answer received: !ym Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo284 Command to send: c o284 add ro283 e Answer received: !ybtrue Command to 
send: c z:org.apache.spark.api.python.PythonUtils toSeq ro284 e Answer received: !yro285 Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro285 e Answer received: !yro286 Command to send: c o282 over ro286 e Answer received: !yro287 Command to send: c o274 withColumn srow_index ro287 e Answer received: !yro288 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo289 Command to send: c o289 add srow_index e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro289 e Answer received: !yro290 Command to send: c o281 join ro288 ro290 sinner e Answer received: !yro291 Command to send: c o291 drop srow_index e Answer received: !yro292 Command to send: c o292 writeTo stest_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f e Answer received: !yro293 Command to send: c o293 append e Command to send: m d o180 e Answer received: !yv Command to send: m d o185 e Answer received: !yv Command to send: m d o193 e Answer received: !yv Command to send: m d o199 e Answer received: !yv Command to send: m d o206 e Answer received: !yv Command to send: m d o211 e Answer received: !yv Command to send: m d o178 e Answer received: !yv Command to send: m d o179 e Answer received: !yv Command to send: m d o181 e Answer received: !yv Command to send: m d o182 e Answer received: !yv Command to send: m d o183 e Answer received: !yv Command to send: m d o184 e Answer received: !yv Command to send: m d o186 e Answer received: !yv Command to send: m d o187 e Answer received: !yv Command to send: m d o188 e Answer received: !yv Command to send: m d o189 e Answer received: !yv Command to send: m d o190 e Answer received: !yv Command to send: m d o192 e Answer received: !yv Command to send: m d o194 e Answer received: !yv Command to send: m d o195 e Answer received: !yv Command to send: m d o196 e Answer received: !yv Command to send: m d o197 e Answer received: !yv Command to send: m d o198 e Answer received: !yv Command to send: m d o200 e Answer received: !yv Command to send: m d o201 e Answer received: !yv Command to send: m d o202 e Answer received: !yv Command to send: m d o203 e Answer received: !yv Command to send: m d o204 e Answer received: !yv Command to send: m d o205 e Answer received: !yv Command to send: m d o207 e Answer received: !yv Command to send: m d o208 e Answer received: !yv Command to send: m d o209 e Answer received: !yv Command to send: m d o210 e Answer received: !yv Command to send: m d o212 e Answer received: !yv Command to send: m d o213 e Answer received: !yv Command to send: m d o216 e Answer received: !yv Command to send: m d o217 e Answer received: !yv Command to send: m d o220 e Answer received: !yv Command to send: m d o225 e Answer received: !yv Command to send: m d o233 e Answer received: !yv Command to send: m d o239 e Answer received: !yv Command to send: m d o246 e Answer received: !yv Command to send: m d o251 e Answer received: !yv Command to send: m d o218 e Answer received: !yv Command to send: m d o219 e Answer received: !yv Command to send: m d o221 e Answer received: !yv Command to send: m d o222 e Answer received: !yv Command to send: m d o223 e Answer received: !yv Command to send: m d o224 e Answer received: !yv Command to send: m d o226 e Answer received: !yv Command to send: m d o227 e Answer received: !yv 
Command to send: m d o228 e Answer received: !yv Command to send: m d o229 e Answer received: !yv Command to send: m d o230 e Answer received: !yv Command to send: m d o232 e Answer received: !yv Command to send: m d o234 e Answer received: !yv Command to send: m d o235 e Answer received: !yv Command to send: m d o236 e Answer received: !yv Command to send: m d o237 e Answer received: !yv Command to send: m d o238 e Answer received: !yv Command to send: m d o240 e Answer received: !yv Command to send: m d o241 e Answer received: !yv Command to send: m d o242 e Answer received: !yv Command to send: m d o243 e Answer received: !yv Command to send: m d o244 e Answer received: !yv Command to send: m d o245 e Answer received: !yv Command to send: m d o247 e Answer received: !yv Command to send: m d o248 e Answer received: !yv Command to send: m d o249 e Answer received: !yv Command to send: m d o250 e Answer received: !yv Command to send: m d o252 e Answer received: !yv Command to send: m d o253 e Answer received: !yv Command to send: m d o258 e Answer received: !yv Command to send: m d o263 e Answer received: !yv Command to send: m d o271 e Answer received: !yv Command to send: m d o277 e Answer received: !yv Command to send: m d o284 e Answer received: !yv Command to send: m d o256 e Answer received: !yv Command to send: m d o257 e Answer received: !yv Command to send: m d o259 e Answer received: !yv Command to send: m d o260 e Answer received: !yv Command to send: m d o261 e Answer received: !yv Command to send: m d o262 e Answer received: !yv Command to send: m d o264 e Answer received: !yv Command to send: m d o265 e Answer received: !yv Command to send: m d o266 e Answer received: !yv Command to send: m d o267 e Answer received: !yv Command to send: m d o268 e Answer received: !yv Command to send: m d o270 e Answer received: !yv Command to send: m d o272 e Answer received: !yv Command to send: m d o273 e Answer received: !yv Command to send: m d o275 e Answer received: !yv Command to send: m d o276 e Answer received: !yv Command to send: m d o278 e Answer received: !yv Command to send: m d o279 e Answer received: !yv Command to send: m d o280 e Answer received: !yv Command to send: m d o282 e Answer received: !yv Command to send: m d o283 e Answer received: !yv Command to send: m d o285 e Answer received: !yv Command to send: m d o286 e Answer received: !yv Command to send: m d o289 e Answer received: !yv Answer received: !yv Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/data/00000-11-09ed86da-ab41-4766-9ce5-114fac402bbb-00001.parquet' Request method: 'PUT' Request headers: 'Content-Length': '967' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b9d9627c-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/data/00000-11-09ed86da-ab41-4766-9ce5-114fac402bbb-00001.parquet HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x269A772216A42A0"' 'last-modified': 'Fri, 04 Apr 2025 18:15:02 GMT' 'content-md5': 'REDACTED' 
'x-ms-client-request-id': 'b9d9627c-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'a5c26ae3-9089-4259-86c3-4423054db1c9' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:02 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/data/00000-9-785ff79c-df2b-447c-b715-d3b23a5e9e34-00001.parquet' Request method: 'PUT' Request headers: 'Content-Length': '967' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b9dad094-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/data/00000-9-785ff79c-df2b-447c-b715-d3b23a5e9e34-00001.parquet HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x1FB6ABA2E6768C0"' 'last-modified': 'Fri, 04 Apr 2025 18:15:02 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b9dad094-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'f32e681f-629c-485e-a3f7-9f46dcee5152' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:02 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/data/00000-7-300670c2-cd0c-4ee4-aad2-a8b5a1b98d62-00001.parquet' Request method: 'PUT' Request headers: 'Content-Length': '967' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b9dbef06-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/data/00000-7-300670c2-cd0c-4ee4-aad2-a8b5a1b98d62-00001.parquet HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x1C5455D7102EFE0"' 'last-modified': 'Fri, 04 Apr 2025 18:15:02 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b9dbef06-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '08034de1-3d63-45f2-ad47-4c74e3741f18' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:02 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/v2.metadata.json' Request method: 'PUT' Request headers: 'Content-Length': '3018' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 
(Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b9dcf126-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/v2.metadata.json HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x1B33E328DF06D60"' 'last-modified': 'Fri, 04 Apr 2025 18:15:02 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b9dcf126-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'edbf2b8a-f775-48cb-a57f-f454d9d41e26' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:02 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/snap-5160775052120443169-1-b0e304ce-0809-419b-96f8-0f14f0227e1d.avro' Request method: 'PUT' Request headers: 'Content-Length': '4414' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b9ddfdc8-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/snap-5160775052120443169-1-b0e304ce-0809-419b-96f8-0f14f0227e1d.avro HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x24C550EE7FE4A60"' 'last-modified': 'Fri, 04 Apr 2025 18:15:02 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b9ddfdc8-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'c623e1cc-6cdb-47cb-81d3-52e1650b2299' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:02 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/snap-8617900314050178602-1-aeb16799-dddb-4d66-96c3-6d911dfafa12.avro' Request method: 'PUT' Request headers: 'Content-Length': '4368' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b9def0e8-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/snap-8617900314050178602-1-aeb16799-dddb-4d66-96c3-6d911dfafa12.avro HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x1D0DFE22C42DF90"' 'last-modified': 'Fri, 04 Apr 2025 18:15:02 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b9def0e8-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'e7e67df2-a387-4a82-b236-5dc05267b9c7' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 
Apr 2025 18:15:02 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/b0e304ce-0809-419b-96f8-0f14f0227e1d-m0.avro' Request method: 'PUT' Request headers: 'Content-Length': '6709' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b9dfdf3a-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/b0e304ce-0809-419b-96f8-0f14f0227e1d-m0.avro HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x255C47CE2869E00"' 'last-modified': 'Fri, 04 Apr 2025 18:15:02 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b9dfdf3a-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '602a2881-cade-4503-863d-e09c7ab86b04' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:02 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/snap-8956046916379606865-1-97c98e73-bce7-4843-95bb-032c5640583a.avro' Request method: 'PUT' Request headers: 'Content-Length': '4293' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b9e0c86e-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/snap-8956046916379606865-1-97c98e73-bce7-4843-95bb-032c5640583a.avro HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x1EC7C8B3A349170"' 'last-modified': 'Fri, 04 Apr 2025 18:15:02 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b9e0c86e-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '582cbe23-76b1-4fb8-83fe-1c0f8ab7b6a7' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:02 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/97c98e73-bce7-4843-95bb-032c5640583a-m0.avro' Request method: 'PUT' Request headers: 'Content-Length': '6708' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b9e1b922-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body 
is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/97c98e73-bce7-4843-95bb-032c5640583a-m0.avro HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x247B12B0411E600"' 'last-modified': 'Fri, 04 Apr 2025 18:15:02 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b9e1b922-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '0e85a101-16f7-4a96-879c-c0d9843b8643' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:02 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/version-hint.text' Request method: 'PUT' Request headers: 'Content-Length': '1' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b9e2b2e6-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/version-hint.text HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x254FE3DE6F491E0"' 'last-modified': 'Fri, 04 Apr 2025 18:15:02 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b9e2b2e6-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '22bc7466-dd86-44c8-96eb-16ae81a17413' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:02 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/aeb16799-dddb-4d66-96c3-6d911dfafa12-m0.avro' Request method: 'PUT' Request headers: 'Content-Length': '6710' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b9e3aade-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/aeb16799-dddb-4d66-96c3-6d911dfafa12-m0.avro HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x22D5CCD97060980"' 'last-modified': 'Fri, 04 Apr 2025 18:15:02 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b9e3aade-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'ff36da98-28b7-4204-8961-40ea5be410c3' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:02 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 
'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/v1.metadata.json' Request method: 'PUT' Request headers: 'Content-Length': '1941' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b9e4a1be-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/v1.metadata.json HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x1EC3858E3C02B90"' 'last-modified': 'Fri, 04 Apr 2025 18:15:02 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b9e4a1be-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '4c47e1a5-f565-4aa1-8403-6d1e2678b75b' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:02 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/v3.metadata.json' Request method: 'PUT' Request headers: 'Content-Length': '4096' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b9e59826-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/v3.metadata.json HTTP/1.1" 201 0 Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x2252DBD33570900"' 'last-modified': 'Fri, 04 Apr 2025 18:15:02 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b9e59826-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '390f03b3-b2fa-48a2-ad34-aead02e003a0' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:02 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' Adding another dataframe. 
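
For readers decoding the interleaved traces: the Command to send / Answer received pairs are Py4J's text protocol between the Python test driver and the Spark JVM. A rough legend, inferred from the Py4J protocol constants and hedged accordingly:

    # c <obj> <method> <args...> e   call a method on a JVM object; a "z:" prefix on a
    #                                class name (c z:org.apache...) marks a static call
    # i <class> e                    construct a new instance (e.g. java.util.ArrayList)
    # r u <name> / r m <cls> <mbr>   reflection lookups for classes and their members
    # m d <obj> e                    memory command: release the JVM object reference
    # arguments: sfoo = string "foo", i1 = int 1, ro213 = reference to object o213
    # answers:   !y = success, then the value: ro213 (object ref), btrue (bool), i1 (int),
    #            v (void), c<class>, p (package), m (method), l<obj> (list), a<obj> (map)

The long burst of m d oNNN commands above is Python's garbage collector releasing DataFrame, Column, and helper objects that the driver no longer references.
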
result files:
  /iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/data/00000-11-09ed86da-ab41-4766-9ce5-114fac402bbb-00001.parquet
  /iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/data/00000-9-785ff79c-df2b-447c-b715-d3b23a5e9e34-00001.parquet
  /iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/data/00000-7-300670c2-cd0c-4ee4-aad2-a8b5a1b98d62-00001.parquet
  /iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/v2.metadata.json
  /iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/snap-5160775052120443169-1-b0e304ce-0809-419b-96f8-0f14f0227e1d.avro
  /iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/snap-8617900314050178602-1-aeb16799-dddb-4d66-96c3-6d911dfafa12.avro
  /iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/b0e304ce-0809-419b-96f8-0f14f0227e1d-m0.avro
  /iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/snap-8956046916379606865-1-97c98e73-bce7-4843-95bb-032c5640583a.avro
  /iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/97c98e73-bce7-4843-95bb-032c5640583a-m0.avro
  /iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/version-hint.text
  /iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/aeb16799-dddb-4d66-96c3-6d911dfafa12-m0.avro
  /iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/v1.metadata.json
  /iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/v3.metadata.json
Setup complete.
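
With the Iceberg table staged in Azurite, the test moves to the ClickHouse side: the queries logged below read the same table through icebergAzure, icebergAzureCluster, the object_storage_cluster setting, a CREATE TABLE engine, and remote(), and the test asserts that every read path returns the same rows. A sketch of the comparison, assuming the integration-test node1.query helper (the literals are taken from the logged queries):

    table = "test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f"
    url = "http://azurite1:30000/devstoreaccount1"
    path = f"/iceberg_data/default/{table}/"

    plain = node1.query(
        f"SELECT * FROM icebergAzure(azure, container = 'mycontainer', "
        f"storage_account_url = '{url}', blob_path = '{path}', format=Parquet)"
    )
    clustered = node1.query(
        f"SELECT * FROM icebergAzureCluster('cluster_simple', azure, "
        f"container = 'mycontainer', storage_account_url = '{url}', "
        f"blob_path = '{path}', format=Parquet)"
    )
    assert plain == clustered  # every read path must agree
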
files:
  /iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/data/00000-11-09ed86da-ab41-4766-9ce5-114fac402bbb-00001.parquet
  /iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/data/00000-9-785ff79c-df2b-447c-b715-d3b23a5e9e34-00001.parquet
  /iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/data/00000-7-300670c2-cd0c-4ee4-aad2-a8b5a1b98d62-00001.parquet
  /iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/v2.metadata.json
  /iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/snap-5160775052120443169-1-b0e304ce-0809-419b-96f8-0f14f0227e1d.avro
  /iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/snap-8617900314050178602-1-aeb16799-dddb-4d66-96c3-6d911dfafa12.avro
  /iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/b0e304ce-0809-419b-96f8-0f14f0227e1d-m0.avro
  /iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/snap-8956046916379606865-1-97c98e73-bce7-4843-95bb-032c5640583a.avro
  /iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/97c98e73-bce7-4843-95bb-032c5640583a-m0.avro
  /iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/version-hint.text
  /iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/aeb16799-dddb-4d66-96c3-6d911dfafa12-m0.avro
  /iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/v1.metadata.json
  /iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/v3.metadata.json
Executing query SELECT * FROM system.clusters on node1
Clusters setup:
  cluster_simple 1 1 0 1 node1 172.16.2.10 9000 1 default 0 0 0 \N \N \N
  cluster_simple 1 1 0 2 node2 172.16.2.8 9000 0 default 0 0 0 \N \N \N
  cluster_simple 1 1 0 3 node3 172.16.2.9 9000 0 default 0 0 0 \N \N \N
Executing query SELECT * FROM icebergAzure(azure, container = 'mycontainer', storage_account_url = 'http://azurite1:30000/devstoreaccount1', blob_path = '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/', format=Parquet) on node1
Executing query SELECT * FROM icebergAzureCluster('cluster_simple', azure, container = 'mycontainer', storage_account_url = 'http://azurite1:30000/devstoreaccount1', blob_path = '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/', format=Parquet) on node1
Executing query SELECT * FROM icebergAzure(azure, container = 'mycontainer', storage_account_url = 'http://azurite1:30000/devstoreaccount1', blob_path = '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/', format=Parquet) SETTINGS object_storage_cluster='cluster_simple' on node1
Executing query DROP TABLE IF EXISTS test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f; CREATE TABLE test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f ENGINE=IcebergAzure(azure, container = mycontainer, storage_account_url = 'http://azurite1:30000/devstoreaccount1', blob_path = '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/', format=Parquet) SETTINGS object_storage_cluster = 'cluster_simple' on node1
FROM test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f on node1 Executing query SELECT * FROM remote('node2', icebergAzureCluster('cluster_simple', azure, container = 'mycontainer', storage_account_url = 'http://azurite1:30000/devstoreaccount1', blob_path = '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/', format=Parquet) ) on node1 Executing query DROP TABLE IF EXISTS `test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f` SYNC on node1 Executing query DROP TABLE IF EXISTS test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f; CREATE TABLE test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f ENGINE=IcebergAzure(azure, container = mycontainer, storage_account_url = 'http://azurite1:30000/devstoreaccount1', blob_path = '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/', format=Parquet) on node1 Command to send: m d o288 e Answer received: !yv Command to send: m d o290 e Answer received: !yv Command to send: m d o291 e Answer received: !yv Command to send: m d o292 e Answer received: !yv Command to send: m d o293 e Answer received: !yv Executing query SELECT * FROM test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f on node1 Executing query SELECT * FROM test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f SETTINGS object_storage_cluster='cluster_simple' on node1 ------------------------------ Captured log call ------------------------------- 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: c o50 sc e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !yro178 (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: c o178 defaultParallelism e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !yi1 (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: c o61 range i0 i100 i1 i1 e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !yro179 (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !ylo180 (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: c o180 add sa e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro180 e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !yro181 (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: c o179 toDF ro181 e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !yro182 (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: c o50 sc e 
(clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !yro183 (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: c o183 defaultParallelism e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !yi1 (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: c o61 range i1 i101 i1 i1 e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !yro184 (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !ylo185 (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: c o185 add sb e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro185 e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !yro186 (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: c o184 toDF ro186 e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !yro187 (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: c o187 apply sb e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !yro188 (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: r u SparkSession rj e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !yro189 (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: c o189 isDefined e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: r u SparkSession rj e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: c 
z:org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !yro190 (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: c o190 get e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !yro191 (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: r u SparkSession$ rj e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession$ (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession$ MODULE$ e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !yro192 (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: i java.util.HashMap e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !yao193 (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: c o192 applyModifiableSettings ro191 ro193 e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: c o61 parseDataType s"string" e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !yro194 (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: c o188 cast ro194 e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !yro195 (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: c o187 withColumn sb ro195 e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !yro196 (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !yro197 (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !yro198 (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: r u org rj e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 
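The py4j DEBUG records above and below are the wire-level trace of the test's Spark driver building and writing the Iceberg table (the add_df helper referenced at test.py:645 in the captured log). Reconstructed as PySpark, the logged call sequence (range -> toDF -> cast -> row_number over a Window -> join -> drop -> writeTo) corresponds roughly to the sketch below; the helper names and exact argument spelling here are assumptions inferred from the trace, not the verbatim test code.

    from pyspark.sql.functions import monotonically_increasing_id, row_number
    from pyspark.sql.types import StringType
    from pyspark.sql.window import Window

    def build_df(spark, start, end):
        # 'c o61 range i0 i100 i1 i1' / 'toDF sa': one 100-row column named "a"
        a = spark.range(start, end, 1, 1).toDF("a")
        # 'c o61 range i1 i101 i1 i1' / 'toDF sb', then 'parseDataType s"string"'
        # and 'cast': a shifted column "b" cast to string
        b = spark.range(start + 1, end + 1, 1, 1).toDF("b")
        b = b.withColumn("b", b["b"].cast(StringType()))
        # 'row_number' over 'Window orderBy monotonically_increasing_id':
        # a synthetic row_index used only to zip the two frames together
        window = Window.orderBy(monotonically_increasing_id())
        a = a.withColumn("row_index", row_number().over(window))
        b = b.withColumn("row_index", row_number().over(window))
        # 'join ... sinner' then 'drop srow_index'
        return a.join(b, on="row_index", how="inner").drop("row_index")

    def write_iceberg(df, table_name, create=True):
        # 'writeTo s<table>' followed by either the create or the append branch
        writer = df.writeTo(table_name)
        if create:
            # first write: 'tableProperty sformat-version s2', 'using siceberg',
            # 'create' -- yields v1.metadata.json plus the first snapshot
            writer.tableProperty("format-version", "2").using("iceberg").create()
        else:
            # later writes: 'append' -- adds a snapshot and v2.metadata.json,
            # matching the second py4j sequence and the PUTs that follow it
            writer.append()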
2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: r u org.apache rj e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: r u org.apache.spark rj e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql rj e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions.Window rj e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.expressions.Window (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.expressions.Window orderBy e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !ylo199 (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: c o199 add ro198 e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro199 e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !yro200 (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro200 e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !yro201 (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: c o197 over ro201 e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !yro202 (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: c o182 withColumn srow_index ro202 e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !yro203 (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 
670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !yro204 (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !yro205 (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: r u org rj e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: r u org.apache rj e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: r u org.apache.spark rj e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql rj e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions.Window rj e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.expressions.Window (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.expressions.Window orderBy e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !ylo206 (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: c o206 add ro205 e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: c 
z:org.apache.spark.api.python.PythonUtils toSeq ro206 e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !yro207 (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro207 e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !yro208 (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: c o204 over ro208 e (clientserver.py:501, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Answer received: !yro209 (clientserver.py:512, send_command) 2025-04-04 18:15:00 [ 670 ] DEBUG : Command to send: c o196 withColumn srow_index ro209 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro210 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ylo211 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o211 add srow_index e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro211 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro212 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o203 join ro210 ro212 sinner e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro213 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o213 drop srow_index e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro214 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o214 writeTo stest_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro215 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o215 tableProperty sformat-version s2 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro216 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o215 using siceberg e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro217 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o215 create e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] INFO : Request URL: 
'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/data/00000-7-300670c2-cd0c-4ee4-aad2-a8b5a1b98d62-00001.parquet' Request method: 'PUT' Request headers: 'Content-Length': '967' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b96bc834-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request (_universal.py:511, on_request) 2025-04-04 18:15:01 [ 670 ] DEBUG : http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/data/00000-7-300670c2-cd0c-4ee4-aad2-a8b5a1b98d62-00001.parquet HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:01 [ 670 ] INFO : Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x21025607BA9E9E0"' 'last-modified': 'Fri, 04 Apr 2025 18:15:01 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b96bc834-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'c37d510a-512f-4859-9f31-a35495a51e05' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:01 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' (_universal.py:550, on_response) 2025-04-04 18:15:01 [ 670 ] INFO : Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/snap-8956046916379606865-1-97c98e73-bce7-4843-95bb-032c5640583a.avro' Request method: 'PUT' Request headers: 'Content-Length': '4293' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b96d1874-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request (_universal.py:511, on_request) 2025-04-04 18:15:01 [ 670 ] DEBUG : http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/snap-8956046916379606865-1-97c98e73-bce7-4843-95bb-032c5640583a.avro HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:01 [ 670 ] INFO : Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x22C707E66DC8C80"' 'last-modified': 'Fri, 04 Apr 2025 18:15:01 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b96d1874-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '07085ba6-e2cd-4f15-b289-c413f00db0ad' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:01 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' (_universal.py:550, on_response) 2025-04-04 18:15:01 [ 670 ] INFO : Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/97c98e73-bce7-4843-95bb-032c5640583a-m0.avro' Request method: 'PUT' Request headers: 'Content-Length': '6708' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 
'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b96e16a2-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request (_universal.py:511, on_request) 2025-04-04 18:15:01 [ 670 ] DEBUG : http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/97c98e73-bce7-4843-95bb-032c5640583a-m0.avro HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:01 [ 670 ] INFO : Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x25153593AA381A0"' 'last-modified': 'Fri, 04 Apr 2025 18:15:01 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b96e16a2-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '45ea37a0-8d46-4c60-ad29-6fbb44a68a39' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:01 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' (_universal.py:550, on_response) 2025-04-04 18:15:01 [ 670 ] INFO : Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/version-hint.text' Request method: 'PUT' Request headers: 'Content-Length': '1' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b96f0ce2-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request (_universal.py:511, on_request) 2025-04-04 18:15:01 [ 670 ] DEBUG : http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/version-hint.text HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:01 [ 670 ] INFO : Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x228DEF9DF377D40"' 'last-modified': 'Fri, 04 Apr 2025 18:15:01 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b96f0ce2-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'ce42782e-b293-4718-beab-95470927ec82' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:01 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' (_universal.py:550, on_response) 2025-04-04 18:15:01 [ 670 ] INFO : Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/v1.metadata.json' Request method: 'PUT' Request headers: 'Content-Length': '1941' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b96ff7ba-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request (_universal.py:511, on_request) 2025-04-04 18:15:01 [ 670 ] DEBUG : http://127.0.0.1:30000 "PUT 
/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/v1.metadata.json HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:01 [ 670 ] INFO : Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x1DF7214A882C230"' 'last-modified': 'Fri, 04 Apr 2025 18:15:01 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b96ff7ba-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '51eced87-5edb-4de1-8d7d-a37ab3377371' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:01 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' (_universal.py:550, on_response) 2025-04-04 18:15:01 [ 670 ] INFO : Adding another dataframe. result files: ['/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/data/00000-7-300670c2-cd0c-4ee4-aad2-a8b5a1b98d62-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/snap-8956046916379606865-1-97c98e73-bce7-4843-95bb-032c5640583a.avro', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/97c98e73-bce7-4843-95bb-032c5640583a-m0.avro', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/version-hint.text', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/v1.metadata.json'] (test.py:645, add_df) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o50 sc e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro218 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o218 defaultParallelism e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yi1 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o61 range i0 i100 i1 i1 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro219 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ylo220 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o220 add sa e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro220 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro221 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o219 toDF ro221 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro222 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : 
Command to send: c o50 sc e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro223 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o223 defaultParallelism e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yi1 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o61 range i1 i101 i1 i1 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro224 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ylo225 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o225 add sb e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro225 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro226 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o224 toDF ro226 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro227 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o227 apply sb e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro228 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u SparkSession rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro229 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o229 isDefined e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u SparkSession rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 
] DEBUG : Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro230 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o230 get e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro231 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u SparkSession$ rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession$ (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession$ MODULE$ e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro232 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: i java.util.HashMap e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yao233 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o232 applyModifiableSettings ro231 ro233 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o61 parseDataType s"string" e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro234 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o228 cast ro234 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro235 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o227 withColumn sb ro235 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro236 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro237 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro238 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u org rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yp 
(clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u org.apache rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u org.apache.spark rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions.Window rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.expressions.Window (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.expressions.Window orderBy e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ylo239 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o239 add ro238 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro239 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro240 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro240 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro241 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o237 over ro241 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro242 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o222 withColumn srow_index ro242 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro243 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions row_number e (clientserver.py:501, 
send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro244 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro245 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u org rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u org.apache rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u org.apache.spark rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions.Window rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.expressions.Window (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.expressions.Window orderBy e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ylo246 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o246 add ro245 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to 
send: c z:org.apache.spark.api.python.PythonUtils toSeq ro246 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro247 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro247 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro248 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o244 over ro248 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro249 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o236 withColumn srow_index ro249 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro250 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ylo251 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o251 add srow_index e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro251 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro252 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o243 join ro250 ro252 sinner e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro253 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o253 drop srow_index e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro254 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o254 writeTo stest_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro255 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o255 append e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] INFO : Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/data/00000-9-785ff79c-df2b-447c-b715-d3b23a5e9e34-00001.parquet' Request method: 'PUT' Request headers: 'Content-Length': '967' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 
'b9a62ac4-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request (_universal.py:511, on_request) 2025-04-04 18:15:01 [ 670 ] DEBUG : http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/data/00000-9-785ff79c-df2b-447c-b715-d3b23a5e9e34-00001.parquet HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:01 [ 670 ] INFO : Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x2134CA27E6ECD00"' 'last-modified': 'Fri, 04 Apr 2025 18:15:01 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b9a62ac4-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '7e60bf31-aa44-4257-ad83-f48fcc20ce68' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:01 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' (_universal.py:550, on_response) 2025-04-04 18:15:01 [ 670 ] INFO : Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/data/00000-7-300670c2-cd0c-4ee4-aad2-a8b5a1b98d62-00001.parquet' Request method: 'PUT' Request headers: 'Content-Length': '967' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b9a77af0-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request (_universal.py:511, on_request) 2025-04-04 18:15:01 [ 670 ] DEBUG : http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/data/00000-7-300670c2-cd0c-4ee4-aad2-a8b5a1b98d62-00001.parquet HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:01 [ 670 ] INFO : Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x2522E3652A2E7E0"' 'last-modified': 'Fri, 04 Apr 2025 18:15:01 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b9a77af0-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '6c569448-919d-42d3-aa1c-d7d8f4f819aa' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:01 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' (_universal.py:550, on_response) 2025-04-04 18:15:01 [ 670 ] INFO : Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/v2.metadata.json' Request method: 'PUT' Request headers: 'Content-Length': '3018' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b9a88814-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request (_universal.py:511, on_request) 2025-04-04 18:15:01 [ 670 ] DEBUG : http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/v2.metadata.json HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 
18:15:01 [ 670 ] INFO : Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x1C333A6CF0FA1D0"' 'last-modified': 'Fri, 04 Apr 2025 18:15:01 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b9a88814-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '47707747-1eac-433d-bb3d-00fa2341fa7e' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:01 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' (_universal.py:550, on_response) 2025-04-04 18:15:01 [ 670 ] INFO : Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/snap-8617900314050178602-1-aeb16799-dddb-4d66-96c3-6d911dfafa12.avro' Request method: 'PUT' Request headers: 'Content-Length': '4368' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b9a99312-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request (_universal.py:511, on_request) 2025-04-04 18:15:01 [ 670 ] DEBUG : http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/snap-8617900314050178602-1-aeb16799-dddb-4d66-96c3-6d911dfafa12.avro HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:01 [ 670 ] INFO : Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x1DA756055758E50"' 'last-modified': 'Fri, 04 Apr 2025 18:15:01 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b9a99312-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '96cc654b-d073-4435-a310-b9c0e9baf661' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:01 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' (_universal.py:550, on_response) 2025-04-04 18:15:01 [ 670 ] INFO : Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/snap-8956046916379606865-1-97c98e73-bce7-4843-95bb-032c5640583a.avro' Request method: 'PUT' Request headers: 'Content-Length': '4293' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b9ab49e6-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request (_universal.py:511, on_request) 2025-04-04 18:15:01 [ 670 ] DEBUG : http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/snap-8956046916379606865-1-97c98e73-bce7-4843-95bb-032c5640583a.avro HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:01 [ 670 ] INFO : Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x215CA31A80155E0"' 'last-modified': 'Fri, 04 Apr 2025 18:15:01 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b9ab49e6-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 
'ebaeafa1-dafd-4fd2-b216-a13e13d848b5' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:01 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' (_universal.py:550, on_response) 2025-04-04 18:15:01 [ 670 ] INFO : Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/97c98e73-bce7-4843-95bb-032c5640583a-m0.avro' Request method: 'PUT' Request headers: 'Content-Length': '6708' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b9aca5ca-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request (_universal.py:511, on_request) 2025-04-04 18:15:01 [ 670 ] DEBUG : http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/97c98e73-bce7-4843-95bb-032c5640583a-m0.avro HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:01 [ 670 ] INFO : Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x25F55390550AD80"' 'last-modified': 'Fri, 04 Apr 2025 18:15:01 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b9aca5ca-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'c5d3b0b4-8e06-47e8-995b-ffa4a927ca3c' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:01 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' (_universal.py:550, on_response) 2025-04-04 18:15:01 [ 670 ] INFO : Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/version-hint.text' Request method: 'PUT' Request headers: 'Content-Length': '1' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b9ad94bc-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request (_universal.py:511, on_request) 2025-04-04 18:15:01 [ 670 ] DEBUG : http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/version-hint.text HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:01 [ 670 ] INFO : Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x1DDC44888BCFC70"' 'last-modified': 'Fri, 04 Apr 2025 18:15:01 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b9ad94bc-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '03ffea26-4d53-4be6-87f7-962d3f4e287f' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:01 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' (_universal.py:550, on_response) 2025-04-04 18:15:01 [ 670 ] INFO : Request URL: 
'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/aeb16799-dddb-4d66-96c3-6d911dfafa12-m0.avro' Request method: 'PUT' Request headers: 'Content-Length': '6710' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b9ae636a-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request (_universal.py:511, on_request) 2025-04-04 18:15:01 [ 670 ] DEBUG : http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/aeb16799-dddb-4d66-96c3-6d911dfafa12-m0.avro HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:01 [ 670 ] INFO : Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x2677F87631060E0"' 'last-modified': 'Fri, 04 Apr 2025 18:15:01 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b9ae636a-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'eb7af4fd-11c3-43b2-9bcf-91e65ec14686' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:01 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' (_universal.py:550, on_response) 2025-04-04 18:15:01 [ 670 ] INFO : Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/v1.metadata.json' Request method: 'PUT' Request headers: 'Content-Length': '1941' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b9af3420-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request (_universal.py:511, on_request) 2025-04-04 18:15:01 [ 670 ] DEBUG : http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/v1.metadata.json HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:01 [ 670 ] INFO : Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x23BF6EE1ECE1FA0"' 'last-modified': 'Fri, 04 Apr 2025 18:15:01 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b9af3420-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'f7544bfc-63bf-452d-83c6-5c5d882a1318' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:01 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' (_universal.py:550, on_response) 2025-04-04 18:15:01 [ 670 ] INFO : Adding another dataframe. 
result files: ['/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/data/00000-9-785ff79c-df2b-447c-b715-d3b23a5e9e34-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/data/00000-7-300670c2-cd0c-4ee4-aad2-a8b5a1b98d62-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/v2.metadata.json', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/snap-8617900314050178602-1-aeb16799-dddb-4d66-96c3-6d911dfafa12.avro', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/snap-8956046916379606865-1-97c98e73-bce7-4843-95bb-032c5640583a.avro', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/97c98e73-bce7-4843-95bb-032c5640583a-m0.avro', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/version-hint.text', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/aeb16799-dddb-4d66-96c3-6d911dfafa12-m0.avro', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/v1.metadata.json'] (test.py:645, add_df) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o50 sc e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro256 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o256 defaultParallelism e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yi1 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o61 range i0 i100 i1 i1 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro257 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ylo258 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o258 add sa e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro258 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro259 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o257 toDF ro259 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro260 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o50 sc e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro261 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o261 
defaultParallelism e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yi1 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o61 range i1 i101 i1 i1 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro262 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ylo263 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o263 add sb e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro263 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro264 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o262 toDF ro264 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro265 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o265 apply sb e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro266 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u SparkSession rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro267 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o267 isDefined e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u SparkSession rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro268 (clientserver.py:512, send_command) 
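The 'Command to send' / 'Answer received' pairs above and below are py4j's text protocol between the Python test process and the Spark JVM: 'c o<id> <method> <args> e' calls a method on JVM object o<id> ('z:<class>' for static calls), 'i <class>' instantiates an object, 'r u' / 'r m' reflectively resolve classes and members, and 'm d o<id>' releases a JVM-side reference. Answers starting '!y' are successes; as the trace itself shows, the next characters encode the result: 'ro256' an object reference, 'i1' an integer, 'btrue' a boolean, 'c<name>' a class, 'p' a package, 'm' a member, 'v' void. A minimal sketch (not from the test) of Python code that produces exactly this kind of traffic, assuming a py4j GatewayServer is already listening on the default port:

    from py4j.java_gateway import JavaGateway

    gw = JavaGateway()                  # connect to the running GatewayServer
    lst = gw.jvm.java.util.ArrayList()  # wire: 'i java.util.ArrayList e' -> '!ylo<id>'
    lst.add("a")                        # wire: 'c o<id> add sa e'        -> '!ybtrue'
    size = lst.size()                   # wire: 'c o<id> size e'          -> '!yi1'
    del lst                             # proxy collected: 'm d o<id> e'  -> '!yv'
    gw.close()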
2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o268 get e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro269 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u SparkSession$ rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession$ (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession$ MODULE$ e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro270 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: i java.util.HashMap e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yao271 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o270 applyModifiableSettings ro269 ro271 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o61 parseDataType s"string" e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro272 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o266 cast ro272 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro273 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o265 withColumn sb ro273 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro274 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro275 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro276 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u org rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u org.apache rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, 
send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u org.apache.spark rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions.Window rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.expressions.Window (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.expressions.Window orderBy e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ylo277 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o277 add ro276 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro277 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro278 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro278 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro279 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o275 over ro279 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro280 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o260 withColumn srow_index ro280 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro281 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions row_number e 
(clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro282 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro283 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u org rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u org.apache rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u org.apache.spark rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions.Window rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.expressions.Window (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.expressions.Window orderBy e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ylo284 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o284 add ro283 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro284 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro285 (clientserver.py:512, send_command) 2025-04-04 
18:15:01 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro285 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro286 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o282 over ro286 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro287 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o274 withColumn srow_index ro287 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro288 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ylo289 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o289 add srow_index e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro289 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro290 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o281 join ro288 ro290 sinner e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro291 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o291 drop srow_index e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro292 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o292 writeTo stest_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yro293 (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: c o293 append e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o180 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o185 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o193 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o199 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o206 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv 
(clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o211 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o178 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o179 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o181 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o182 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o183 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o184 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o186 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o187 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o188 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o189 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o190 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o192 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o194 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o195 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o196 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o197 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o198 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command 
to send: m d o200 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o201 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o202 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o203 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o204 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o205 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o207 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o208 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o209 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o210 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o212 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o213 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o216 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o217 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o220 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o225 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o233 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o239 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o246 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 
670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o251 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o218 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o219 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o221 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o222 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o223 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o224 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o226 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o227 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o228 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o229 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o230 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o232 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o234 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o235 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o236 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o237 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o238 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 
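Decoded, the py4j sequence that ends at the 'append' call above is an ordinary PySpark dataframe build followed by an Iceberg append; the burst of 'm d o<id>' records around this point is py4j freeing the intermediate JVM objects once the write returned. A hedged reconstruction of the Python side (table and column names are read off the trace; the exact helper in test.py may differ, and this assumes the Iceberg table was created earlier in the run):

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import monotonically_increasing_id, row_number
    from pyspark.sql.window import Window

    TABLE = "test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f"

    spark = SparkSession.getActiveSession()        # 'getActiveSession' / 'isDefined' in the trace
    a = spark.range(0, 100, 1, 1).toDF("a")        # 'range i0 i100 i1 i1', then 'toDF'
    b = spark.range(1, 101, 1, 1).toDF("b")
    b = b.withColumn("b", b["b"].cast("string"))   # 'parseDataType s"string"', then 'cast'
    w = Window.orderBy(monotonically_increasing_id())
    a = a.withColumn("row_index", row_number().over(w))
    b = b.withColumn("row_index", row_number().over(w))
    df = a.join(b, "row_index", "inner").drop("row_index")  # 'join ... sinner', 'drop srow_index'
    df.writeTo(TABLE).append()                     # 'writeTo s<table>', then 'append'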
2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o240 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o241 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o242 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o243 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o244 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o245 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o247 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o248 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o249 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o250 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o252 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o253 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o258 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o263 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o271 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o277 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o284 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o256 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o257 e 
(clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o259 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o260 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o261 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o262 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o264 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o265 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o266 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o267 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o268 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o270 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o272 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o273 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o275 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o276 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o278 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o279 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o280 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o282 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer 
received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o283 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o285 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o286 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Command to send: m d o289 e (clientserver.py:501, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:01 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:02 [ 670 ] INFO : Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/data/00000-11-09ed86da-ab41-4766-9ce5-114fac402bbb-00001.parquet' Request method: 'PUT' Request headers: 'Content-Length': '967' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b9d9627c-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request (_universal.py:511, on_request) 2025-04-04 18:15:02 [ 670 ] DEBUG : http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/data/00000-11-09ed86da-ab41-4766-9ce5-114fac402bbb-00001.parquet HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:02 [ 670 ] INFO : Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x269A772216A42A0"' 'last-modified': 'Fri, 04 Apr 2025 18:15:02 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b9d9627c-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'a5c26ae3-9089-4259-86c3-4423054db1c9' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:02 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' (_universal.py:550, on_response) 2025-04-04 18:15:02 [ 670 ] INFO : Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/data/00000-9-785ff79c-df2b-447c-b715-d3b23a5e9e34-00001.parquet' Request method: 'PUT' Request headers: 'Content-Length': '967' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b9dad094-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request (_universal.py:511, on_request) 2025-04-04 18:15:02 [ 670 ] DEBUG : http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/data/00000-9-785ff79c-df2b-447c-b715-d3b23a5e9e34-00001.parquet HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 
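The 'result files' listings in this section show the standard Iceberg layout these uploads materialize: row data under data/*.parquet, one vN.metadata.json per commit, a snap-*.avro manifest list plus a *-m0.avro manifest per snapshot, and version-hint.text holding the current metadata version (hence its Content-Length of 1). A hedged sketch of resolving the current metadata file from that hint (local-path flavored; the test keeps the same layout inside Azurite):

    TABLE_ROOT = "/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f"

    with open(TABLE_ROOT + "/metadata/version-hint.text") as f:
        version = int(f.read().strip())            # e.g. 3 after the third commit
    metadata_path = f"{TABLE_ROOT}/metadata/v{version}.metadata.json"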
2025-04-04 18:15:02 [ 670 ] INFO : Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x1FB6ABA2E6768C0"' 'last-modified': 'Fri, 04 Apr 2025 18:15:02 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b9dad094-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'f32e681f-629c-485e-a3f7-9f46dcee5152' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:02 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' (_universal.py:550, on_response) 2025-04-04 18:15:02 [ 670 ] INFO : Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/data/00000-7-300670c2-cd0c-4ee4-aad2-a8b5a1b98d62-00001.parquet' Request method: 'PUT' Request headers: 'Content-Length': '967' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b9dbef06-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request (_universal.py:511, on_request) 2025-04-04 18:15:02 [ 670 ] DEBUG : http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/data/00000-7-300670c2-cd0c-4ee4-aad2-a8b5a1b98d62-00001.parquet HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:02 [ 670 ] INFO : Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x1C5455D7102EFE0"' 'last-modified': 'Fri, 04 Apr 2025 18:15:02 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b9dbef06-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '08034de1-3d63-45f2-ad47-4c74e3741f18' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:02 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' (_universal.py:550, on_response) 2025-04-04 18:15:02 [ 670 ] INFO : Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/v2.metadata.json' Request method: 'PUT' Request headers: 'Content-Length': '3018' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b9dcf126-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request (_universal.py:511, on_request) 2025-04-04 18:15:02 [ 670 ] DEBUG : http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/v2.metadata.json HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:02 [ 670 ] INFO : Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x1B33E328DF06D60"' 'last-modified': 'Fri, 04 Apr 2025 18:15:02 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b9dcf126-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'edbf2b8a-f775-48cb-a57f-f454d9d41e26' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:02 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 
'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' (_universal.py:550, on_response) 2025-04-04 18:15:02 [ 670 ] INFO : Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/snap-5160775052120443169-1-b0e304ce-0809-419b-96f8-0f14f0227e1d.avro' Request method: 'PUT' Request headers: 'Content-Length': '4414' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b9ddfdc8-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request (_universal.py:511, on_request) 2025-04-04 18:15:02 [ 670 ] DEBUG : http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/snap-5160775052120443169-1-b0e304ce-0809-419b-96f8-0f14f0227e1d.avro HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:02 [ 670 ] INFO : Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x24C550EE7FE4A60"' 'last-modified': 'Fri, 04 Apr 2025 18:15:02 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b9ddfdc8-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'c623e1cc-6cdb-47cb-81d3-52e1650b2299' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:02 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' (_universal.py:550, on_response) 2025-04-04 18:15:02 [ 670 ] INFO : Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/snap-8617900314050178602-1-aeb16799-dddb-4d66-96c3-6d911dfafa12.avro' Request method: 'PUT' Request headers: 'Content-Length': '4368' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b9def0e8-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request (_universal.py:511, on_request) 2025-04-04 18:15:02 [ 670 ] DEBUG : http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/snap-8617900314050178602-1-aeb16799-dddb-4d66-96c3-6d911dfafa12.avro HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:02 [ 670 ] INFO : Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x1D0DFE22C42DF90"' 'last-modified': 'Fri, 04 Apr 2025 18:15:02 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b9def0e8-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'e7e67df2-a387-4a82-b236-5dc05267b9c7' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:02 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' (_universal.py:550, on_response) 2025-04-04 18:15:02 [ 670 ] INFO : Request URL: 
'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/b0e304ce-0809-419b-96f8-0f14f0227e1d-m0.avro' Request method: 'PUT' Request headers: 'Content-Length': '6709' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b9dfdf3a-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request (_universal.py:511, on_request)
2025-04-04 18:15:02 [ 670 ] DEBUG : http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/b0e304ce-0809-419b-96f8-0f14f0227e1d-m0.avro HTTP/1.1" 201 0 (connectionpool.py:547, _make_request)
2025-04-04 18:15:02 [ 670 ] INFO : Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x255C47CE2869E00"' 'last-modified': 'Fri, 04 Apr 2025 18:15:02 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b9dfdf3a-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '602a2881-cade-4503-863d-e09c7ab86b04' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:02 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' (_universal.py:550, on_response)
2025-04-04 18:15:02 [ 670 ] INFO : Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/snap-8956046916379606865-1-97c98e73-bce7-4843-95bb-032c5640583a.avro' Request method: 'PUT' Request headers: 'Content-Length': '4293' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b9e0c86e-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request (_universal.py:511, on_request)
2025-04-04 18:15:02 [ 670 ] DEBUG : http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/snap-8956046916379606865-1-97c98e73-bce7-4843-95bb-032c5640583a.avro HTTP/1.1" 201 0 (connectionpool.py:547, _make_request)
2025-04-04 18:15:02 [ 670 ] INFO : Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x1EC7C8B3A349170"' 'last-modified': 'Fri, 04 Apr 2025 18:15:02 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b9e0c86e-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '582cbe23-76b1-4fb8-83fe-1c0f8ab7b6a7' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:02 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' (_universal.py:550, on_response)
2025-04-04 18:15:02 [ 670 ] INFO : Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/97c98e73-bce7-4843-95bb-032c5640583a-m0.avro' Request method: 'PUT' Request headers: 'Content-Length': '6708' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b9e1b922-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request (_universal.py:511, on_request)
2025-04-04 18:15:02 [ 670 ] DEBUG : http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/97c98e73-bce7-4843-95bb-032c5640583a-m0.avro HTTP/1.1" 201 0 (connectionpool.py:547, _make_request)
2025-04-04 18:15:02 [ 670 ] INFO : Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x247B12B0411E600"' 'last-modified': 'Fri, 04 Apr 2025 18:15:02 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b9e1b922-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '0e85a101-16f7-4a96-879c-c0d9843b8643' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:02 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' (_universal.py:550, on_response)
2025-04-04 18:15:02 [ 670 ] INFO : Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/version-hint.text' Request method: 'PUT' Request headers: 'Content-Length': '1' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b9e2b2e6-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request (_universal.py:511, on_request)
2025-04-04 18:15:02 [ 670 ] DEBUG : http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/version-hint.text HTTP/1.1" 201 0 (connectionpool.py:547, _make_request)
2025-04-04 18:15:02 [ 670 ] INFO : Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x254FE3DE6F491E0"' 'last-modified': 'Fri, 04 Apr 2025 18:15:02 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b9e2b2e6-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '22bc7466-dd86-44c8-96eb-16ae81a17413' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:02 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' (_universal.py:550, on_response)
2025-04-04 18:15:02 [ 670 ] INFO : Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/aeb16799-dddb-4d66-96c3-6d911dfafa12-m0.avro' Request method: 'PUT' Request headers: 'Content-Length': '6710' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b9e3aade-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request (_universal.py:511, on_request)
2025-04-04 18:15:02 [ 670 ] DEBUG : http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/aeb16799-dddb-4d66-96c3-6d911dfafa12-m0.avro HTTP/1.1" 201 0 (connectionpool.py:547, _make_request)
2025-04-04 18:15:02 [ 670 ] INFO : Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x22D5CCD97060980"' 'last-modified': 'Fri, 04 Apr 2025 18:15:02 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b9e3aade-1180-11f0-918b-0242ac110002' 'x-ms-request-id': 'ff36da98-28b7-4204-8961-40ea5be410c3' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:02 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' (_universal.py:550, on_response)
2025-04-04 18:15:02 [ 670 ] INFO : Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/v1.metadata.json' Request method: 'PUT' Request headers: 'Content-Length': '1941' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b9e4a1be-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request (_universal.py:511, on_request)
2025-04-04 18:15:02 [ 670 ] DEBUG : http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/v1.metadata.json HTTP/1.1" 201 0 (connectionpool.py:547, _make_request)
2025-04-04 18:15:02 [ 670 ] INFO : Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x1EC3858E3C02B90"' 'last-modified': 'Fri, 04 Apr 2025 18:15:02 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b9e4a1be-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '4c47e1a5-f565-4aa1-8403-6d1e2678b75b' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:02 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' (_universal.py:550, on_response)
2025-04-04 18:15:02 [ 670 ] INFO : Request URL: 'http://127.0.0.1:30000/devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/v3.metadata.json' Request method: 'PUT' Request headers: 'Content-Length': '4096' 'x-ms-blob-type': 'REDACTED' 'x-ms-version': 'REDACTED' 'Content-Type': 'application/octet-stream' 'Accept': 'application/xml' 'User-Agent': 'azsdk-python-storage-blob/12.19.0 Python/3.10.12 (Linux-5.15.0-130-generic-x86_64-with-glibc2.35)' 'x-ms-date': 'REDACTED' 'x-ms-client-request-id': 'b9e59826-1180-11f0-918b-0242ac110002' 'Authorization': 'REDACTED' A body is sent with the request (_universal.py:511, on_request)
2025-04-04 18:15:02 [ 670 ] DEBUG : http://127.0.0.1:30000 "PUT /devstoreaccount1/mycontainer//iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/v3.metadata.json HTTP/1.1" 201 0 (connectionpool.py:547, _make_request)
2025-04-04 18:15:02 [ 670 ] INFO : Response status: 201 Response headers: 'Server': 'Azurite-Blob/3.34.0' 'etag': '"0x2252DBD33570900"' 'last-modified': 'Fri, 04 Apr 2025 18:15:02 GMT' 'content-md5': 'REDACTED' 'x-ms-client-request-id': 'b9e59826-1180-11f0-918b-0242ac110002' 'x-ms-request-id': '390f03b3-b2fa-48a2-ad34-aead02e003a0' 'x-ms-version': 'REDACTED' 'date': 'Fri, 04 Apr 2025 18:15:02 GMT' 'x-ms-request-server-encrypted': 'REDACTED' 'Connection': 'keep-alive' 'Keep-Alive': 'REDACTED' 'Content-Length': '0' (_universal.py:550, on_response)
2025-04-04 18:15:02 [ 670 ] INFO : Adding another dataframe. result files: ['/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/data/00000-11-09ed86da-ab41-4766-9ce5-114fac402bbb-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/data/00000-9-785ff79c-df2b-447c-b715-d3b23a5e9e34-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/data/00000-7-300670c2-cd0c-4ee4-aad2-a8b5a1b98d62-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/v2.metadata.json', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/snap-5160775052120443169-1-b0e304ce-0809-419b-96f8-0f14f0227e1d.avro', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/snap-8617900314050178602-1-aeb16799-dddb-4d66-96c3-6d911dfafa12.avro', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/b0e304ce-0809-419b-96f8-0f14f0227e1d-m0.avro', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/snap-8956046916379606865-1-97c98e73-bce7-4843-95bb-032c5640583a.avro', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/97c98e73-bce7-4843-95bb-032c5640583a-m0.avro', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/version-hint.text', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/aeb16799-dddb-4d66-96c3-6d911dfafa12-m0.avro', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/v1.metadata.json', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/v3.metadata.json'] (test.py:645, add_df)
2025-04-04 18:15:02 [ 670 ] INFO : Setup complete. files: ['/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/data/00000-11-09ed86da-ab41-4766-9ce5-114fac402bbb-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/data/00000-9-785ff79c-df2b-447c-b715-d3b23a5e9e34-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/data/00000-7-300670c2-cd0c-4ee4-aad2-a8b5a1b98d62-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/v2.metadata.json', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/snap-5160775052120443169-1-b0e304ce-0809-419b-96f8-0f14f0227e1d.avro', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/snap-8617900314050178602-1-aeb16799-dddb-4d66-96c3-6d911dfafa12.avro', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/b0e304ce-0809-419b-96f8-0f14f0227e1d-m0.avro', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/snap-8956046916379606865-1-97c98e73-bce7-4843-95bb-032c5640583a.avro', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/97c98e73-bce7-4843-95bb-032c5640583a-m0.avro', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/version-hint.text', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/aeb16799-dddb-4d66-96c3-6d911dfafa12-m0.avro', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/v1.metadata.json', '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/metadata/v3.metadata.json'] (test.py:653, test_cluster_table_function)
2025-04-04 18:15:02 [ 670 ] DEBUG : Executing query SELECT * FROM system.clusters on node1 (cluster.py:3677, query)
2025-04-04 18:15:02 [ 670 ] INFO : Clusters setup: cluster_simple 1 1 0 1 node1 172.16.2.10 9000 1 default 0 0 0 \N \N \N
cluster_simple 1 1 0 2 node2 172.16.2.8 9000 0 default 0 0 0 \N \N \N
cluster_simple 1 1 0 3 node3 172.16.2.9 9000 0 default 0 0 0 \N \N \N (test.py:657, test_cluster_table_function)
2025-04-04 18:15:02 [ 670 ] DEBUG : Executing query SELECT * FROM icebergAzure(azure, container = 'mycontainer', storage_account_url = 'http://azurite1:30000/devstoreaccount1', blob_path = '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/', format=Parquet) on node1 (cluster.py:3677, query)
2025-04-04 18:15:02 [ 670 ] DEBUG : Executing query SELECT * FROM icebergAzureCluster('cluster_simple', azure, container = 'mycontainer', storage_account_url = 'http://azurite1:30000/devstoreaccount1', blob_path = '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/', format=Parquet) on node1 (cluster.py:3677, query)
2025-04-04 18:15:02 [ 670 ] DEBUG : Executing query SELECT * FROM icebergAzure(azure, container = 'mycontainer', storage_account_url = 'http://azurite1:30000/devstoreaccount1', blob_path = '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/', format=Parquet) SETTINGS object_storage_cluster='cluster_simple' on node1 (cluster.py:3677, query)
2025-04-04 18:15:02 [ 670 ] DEBUG : Executing query DROP TABLE IF EXISTS test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f; CREATE TABLE test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f ENGINE=IcebergAzure(azure, container = mycontainer, storage_account_url = 'http://azurite1:30000/devstoreaccount1', blob_path = '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/', format=Parquet) SETTINGS object_storage_cluster = 'cluster_simple' on node1 (cluster.py:3677, query)
2025-04-04 18:15:02 [ 670 ] DEBUG : Executing query SELECT * FROM test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f on node1 (cluster.py:3677, query)
2025-04-04 18:15:02 [ 670 ] DEBUG : Executing query SELECT * FROM remote('node2', icebergAzureCluster('cluster_simple', azure, container = 'mycontainer', storage_account_url = 'http://azurite1:30000/devstoreaccount1', blob_path = '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/', format=Parquet) ) on node1 (cluster.py:3677, query)
2025-04-04 18:15:02 [ 670 ] DEBUG : Executing query DROP TABLE IF EXISTS `test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f` SYNC on node1 (cluster.py:3677, query)
2025-04-04 18:15:02 [ 670 ] DEBUG : Executing query DROP TABLE IF EXISTS test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f; CREATE TABLE test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f ENGINE=IcebergAzure(azure, container = mycontainer, storage_account_url = 'http://azurite1:30000/devstoreaccount1', blob_path = '/iceberg_data/default/test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f/', format=Parquet) on node1 (cluster.py:3677, query)
2025-04-04 18:15:02 [ 670 ] DEBUG : Command to send: m d o288 e (clientserver.py:501, send_command)
2025-04-04 18:15:02 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:02 [ 670 ] DEBUG : Command to send: m d o290 e (clientserver.py:501, send_command)
2025-04-04 18:15:02 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:02 [ 670 ] DEBUG : Command to send: m d o291 e (clientserver.py:501, send_command)
2025-04-04 18:15:02 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:02 [ 670 ] DEBUG : Command to send: m d o292 e (clientserver.py:501, send_command)
2025-04-04 18:15:02 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:02 [ 670 ] DEBUG : Command to send: m d o293 e (clientserver.py:501, send_command)
2025-04-04 18:15:02 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:03 [ 670 ] DEBUG : Executing query SELECT * FROM test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f on node1 (cluster.py:3677, query)
2025-04-04 18:15:03 [ 670 ] DEBUG : Executing query SELECT * FROM test_iceberg_cluster_2_azure_77ab2414_a490_4844_a7ee_0843cc4e854f SETTINGS object_storage_cluster='cluster_simple' on node1 (cluster.py:3677, query)
_____________________ test_cluster_table_function[hdfs-1] ______________________
[gw0] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = 
format_version = '1', storage_type = 'hdfs'

@pytest.mark.parametrize("format_version", ["1", "2"])
@pytest.mark.parametrize("storage_type", ["s3", "azure", "hdfs"])
def test_cluster_table_function(started_cluster, format_version, storage_type):
    if is_arm() and storage_type == "hdfs":
        pytest.skip("Disabled test IcebergHDFS for aarch64")

    instance = started_cluster.instances["node1"]
    spark = started_cluster.spark_session

    TABLE_NAME = (
        "test_iceberg_cluster_"
        + format_version
        + "_"
        + storage_type
        + "_"
        + get_uuid_str()
    )

    def add_df(mode):
        write_iceberg_from_df(
            spark,
            generate_data(spark, 0, 100),
            TABLE_NAME,
            mode=mode,
            format_version=format_version,
        )

        files = default_upload_directory(
            started_cluster,
            storage_type,
            f"/iceberg_data/default/{TABLE_NAME}/",
            f"/iceberg_data/default/{TABLE_NAME}/",
        )

        logging.info(f"Adding another dataframe. result files: {files}")

        return files

    files = add_df(mode="overwrite")
    for i in range(1, len(started_cluster.instances)):
        files = add_df(mode="append")

    logging.info(f"Setup complete. files: {files}")

    assert len(files) == 5 + 4 * (len(started_cluster.instances) - 1)

    clusters = instance.query(f"SELECT * FROM system.clusters")
    logging.info(f"Clusters setup: {clusters}")

    # Regular Query only node1
    table_function_expr = get_creation_expression(
        storage_type, TABLE_NAME, started_cluster, table_function=True
    )
    select_regular = (
        instance.query(f"SELECT * FROM {table_function_expr}").strip().split()
    )

    # Cluster Query with node1 as coordinator
    table_function_expr_cluster = get_creation_expression(
        storage_type,
        TABLE_NAME,
        started_cluster,
        table_function=True,
        run_on_cluster=True,
    )
    query_id_cluster = str(uuid.uuid4())
    select_cluster = (
        instance.query(
            f"SELECT * FROM {table_function_expr_cluster}", query_id=query_id_cluster
        )
        .strip()
        .split()
    )

    # Cluster Query with node1 as coordinator with alternative syntax
    query_id_cluster_alt_syntax = str(uuid.uuid4())
    select_cluster_alt_syntax = (
        instance.query(
            f"""
            SELECT * FROM {table_function_expr}
            SETTINGS object_storage_cluster='cluster_simple'
            """,
            query_id=query_id_cluster_alt_syntax,
        )
        .strip()
        .split()
    )

    create_iceberg_table(storage_type, instance, TABLE_NAME, started_cluster, object_storage_cluster='cluster_simple')
    query_id_cluster_table_engine = str(uuid.uuid4())
    select_cluster_table_engine = (
        instance.query(
            f"""
            SELECT * FROM {TABLE_NAME}
            """,
            query_id=query_id_cluster_table_engine,
        )
        .strip()
        .split()
    )

    select_remote_cluster = (
        instance.query(f"SELECT * FROM remote('node2',{table_function_expr_cluster})")
        .strip()
        .split()
    )

    instance.query(f"DROP TABLE IF EXISTS `{TABLE_NAME}` SYNC")
    create_iceberg_table(storage_type, instance, TABLE_NAME, started_cluster)
    query_id_pure_table_engine = str(uuid.uuid4())
    select_pure_table_engine = (
        instance.query(
            f"""
            SELECT * FROM {TABLE_NAME}
            """,
            query_id=query_id_pure_table_engine,
        )
        .strip()
        .split()
    )

    query_id_pure_table_engine_cluster = str(uuid.uuid4())
    select_pure_table_engine_cluster = (
        instance.query(
            f"""
            SELECT * FROM {TABLE_NAME}
            SETTINGS object_storage_cluster='cluster_simple'
            """,
            query_id=query_id_pure_table_engine_cluster,
        )
        .strip()
        .split()
    )

    # Simple size check
    assert len(select_regular) == 600
    assert len(select_cluster) == 600
    assert len(select_cluster_alt_syntax) == 600
>   assert len(select_cluster_table_engine) == 600
E   AssertionError: assert 1800 == 600
E    +  where 1800 = len(['0', '1', '1', '2', '2', '3', ...])

test_storage_iceberg/test.py:747: AssertionError
----------------------------- Captured stdout call -----------------------------
25/04/04 18:15:03 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
25/04/04 18:15:03 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
25/04/04 18:15:03 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
25/04/04 18:15:03 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
25/04/04 18:15:03 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
25/04/04 18:15:03 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
25/04/04 18:15:03 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
25/04/04 18:15:03 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
{} {} {}
25/04/04 18:15:04 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
25/04/04 18:15:04 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
25/04/04 18:15:04 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
25/04/04 18:15:04 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
25/04/04 18:15:04 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
25/04/04 18:15:04 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
{} {} {} ----------------------------- Captured stderr call ----------------------------- Command to send: c o50 sc e Answer received: !yro294 Command to send: c o294 defaultParallelism e Answer received: !yi1 Command to send: c o61 range i0 i100 i1 i1 e Answer received: !yro295 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo296 Command to send: c o296 add sa e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro296 e Answer received: !yro297 Command to send: c o295 toDF ro297 e Answer received: !yro298 Command to send: c o50 sc e Answer received: !yro299 Command to send: c o299 defaultParallelism e Answer received: !yi1 Command to send: c o61 range i1 i101 i1 i1 e Answer received: !yro300 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo301 Command to send: c o301 add sb e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro301 e Answer received: !yro302 Command to send: c o300 toDF ro302 e Answer received: !yro303 Command to send: c o303 apply sb e Answer received: !yro304 Command to send: r u SparkSession rj e Answer received: !ycorg.apache.spark.sql.SparkSession Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e Answer received: !ym Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e Answer received: !yro305 Command to send: c o305 isDefined e Answer received: !ybtrue Command to send: r u SparkSession rj e Answer received: !ycorg.apache.spark.sql.SparkSession Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e Answer received: !ym Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e Answer received: !yro306 Command to send: c o306 get e Answer received: !yro307 Command to send: r u SparkSession$ rj e Answer received: !ycorg.apache.spark.sql.SparkSession$ Command to send: r m org.apache.spark.sql.SparkSession$ MODULE$ e Answer received: !yro308 Command to send: i java.util.HashMap e Answer received: !yao309 Command to send: c o308 applyModifiableSettings ro307 ro309 e Answer received: !yv Command to send: c o61 parseDataType s"string" e Answer received: !yro310 Command to send: c o304 cast ro310 e Answer received: !yro311 Command to send: c o303 withColumn sb ro311 e Answer received: !yro312 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions row_number e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions row_number e Answer received: !yro313 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !yro314 Command to send: r u org rj e Answer received: !yp Command to send: r u org.apache rj e Answer received: !yp Command to send: r u org.apache.spark rj e Answer received: !yp Command to send: r u org.apache.spark.sql rj e Answer received: !yp Command to send: r u 
org.apache.spark.sql.expressions rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions.Window rj e Answer received: !ycorg.apache.spark.sql.expressions.Window Command to send: r m org.apache.spark.sql.expressions.Window orderBy e Answer received: !ym Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo315 Command to send: c o315 add ro314 e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro315 e Answer received: !yro316 Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro316 e Answer received: !yro317 Command to send: c o313 over ro317 e Answer received: !yro318 Command to send: c o298 withColumn srow_index ro318 e Answer received: !yro319 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions row_number e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions row_number e Answer received: !yro320 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !yro321 Command to send: r u org rj e Answer received: !yp Command to send: r u org.apache rj e Answer received: !yp Command to send: r u org.apache.spark rj e Answer received: !yp Command to send: r u org.apache.spark.sql rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions.Window rj e Answer received: !ycorg.apache.spark.sql.expressions.Window Command to send: r m org.apache.spark.sql.expressions.Window orderBy e Answer received: !ym Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo322 Command to send: c o322 add ro321 e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro322 e Answer received: !yro323 Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro323 e Answer received: !yro324 Command to send: c o320 over ro324 e Answer received: !yro325 Command to send: c o312 withColumn srow_index ro325 e Answer received: !yro326 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo327 Command to send: c o327 add srow_index e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro327 e Answer received: !yro328 Command to send: c o319 join ro326 ro328 sinner e Answer received: !yro329 Command to send: c o329 drop srow_index e Answer received: !yro330 Command to send: c o330 writeTo stest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28 e Answer received: !yro331 Command to send: c o331 tableProperty sformat-version s1 e Answer received: !yro332 Command to send: c o331 using siceberg e Answer received: !yro333 Command to send: c o331 
create e Answer received: !yv GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data?user.name=root&op=GETFILESTATUS HTTP/1.1" 404 None MKDIRS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data?user.name=root&op=MKDIRS HTTP/1.1" 200 None write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': 
b'PAR1\x15\x00\x15\xc0\x0c\x15\xf6\x02\x15\xbd\xf3\xd4\x95\x06\x1c\x15\xc8\x01\x15\x00\x15\x08\x15\x08\x00\x00\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00-\xc5\xd7"\x02\x00\x00\x00\xc0\x90Y![$+\xb3\xb23C\x8a\xec\x8c\xca\x8e\xf2\xff\xff\xe0\xc1\xdd\xcb\x05\x02\xffZ\xdc\xea6\x07\xdd\xee\x0ew\xba\xcb\xdd\xeeq\xc8aG\xdc\xeb>\xf7;\xea\x01\x0fz\xc8\xc3\x1e\xf1\xa8\xc7<\xee\x98\'<\xe9\xb8\xa7\x9c\xf0\xb4g<\xeb9\xcf;\xe9\x05/z\xc9\xcb^\xf1\xaaSN;\xe35\xaf{\xc3\x9b\xde\xf2\xb6w\x9c\xf5\xae\xf7\xbc\xef\x03\x1f\xfa\xc89\x1f\xfb\xc4\xa7\xce\xfb\xcc\x05\x17}\xee\x0b\x97|\xe9+_\xfb\xc6\xb7\xbe\xf3\xbd\xcb~\xf0\xa3\x9f\xfc\xec\x8a\xab\xae\xf9\xc5\xaf~\xf3\xbb?\xfc\xe9/\xd7\xfd\xed\x1f7\xdc\xf4\xaf\xff\x00\x02\xc2\xe7q \x03\x00\x00\x15\x00\x15\xa0\t\x15\xea\x02\x15\x8d\xbc\xbf\xb8\t\x1c\x15\xc8\x01\x15\x00\x15\x08\x15\x08\x00\x00\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00\x1d\xd2\xb1\x81B1\x0cD\xc1\x0f\xd7\x10X\xb2$\xf7\xdf\x18\xe7I^\xb4\x93\xed\xeby\x9e\xef\xeb?\xeb&n\xf2f\xdf\xd4M\xdf\xcc\xcdy\xdf\xf1G\xbf\xba44uki\xeb(\xbb\xd8\xc5.v\xb1\x8b]\xecb\x17\xbb\xd8\xc5\x06\x1bl\xb0\xc1\x06\x1bl\xb0\xc1\x06\x1bl\xb2\xc9&\x9bl\xb2\xc9&\x9bl\xb2\xc9nv\xb3\x9b\xdd\xecf7\xbb\xd9\xcdnv\xb3\xc5\x16[l\xb1\xc5\x16[l\xb1\xc5\x16\xdbl\xb3\xcd6\xdbl\xb3\xcd6\xdbl\xb3\xc3\x0e;\xec\xb0\xc3\x0e;\xec\xb0\xc3\x0e{\xd8\xc3\x1e\xf6\xb0\x87=\xeca\x0f{\xd8s\xfe|\xe3\xf3\x03\xd4\xdb\x86\xadP\x02\x00\x00\x19\x11\x02\x19\x18\x08\x00\x00\x00\x00\x00\x00\x00\x00\x19\x18\x08c\x00\x00\x00\x00\x00\x00\x00\x15\x02\x19\x16\x00\x00\x19\x11\x02\x19\x18\x011\x19\x18\x0299\x15\x02\x19\x16\x00\x00\x19\x1c\x16\x08\x15\xaa\x03\x16\x00\x00\x00\x19\x1c\x16\xb2\x03\x15\x9e\x03\x16\x00\x00\x00\x15\x02\x19\x00&\xb2\x03\x1c\x15\x0c\x19%\x00\x08\x19\x18\x01b\x15\x04\x16\xc8\x01\x16\xd4\t\x16\x9e\x03&\xb2\x03<6\x00(\x0299\x18\x011\x00\x19\x1c\x15\x00\x15\x00\x15\x02\x00\x00\x16\xc8\x07\x15\x18\x16\x8e\x07\x15$\x00\x16\xc8\x16\x16\xc8\x01&\x08\x16\xc8\x06\x14\x00\x00\x19\x1c\x18\x0eiceberg.schema\x18\x90\x01{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":true,"type":"long"},{"id":2,"name":"b","required":true,"type":"string"}]}\x00\x18Jparquet-mr version 1.12.3 (build f8dced182c4c1fbdec6ccb3185537b5a01e6ed6b)\x19,\x1c\x00\x00\x1c\x00\x00\x00\xcf\x01\x00\x00PAR1', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fdata%2F00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet&user.name=root HTTP/1.1" 201 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet', 'Content-Length': '0', 
'Server': 'Jetty(6.1.26)'} b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 404 None MKDIRS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata?user.name=root&op=MKDIRS HTTP/1.1" 200 None write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 
'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'Obj\x01\x0c\x16avro.schema\xfa\x1b{"type":"record","name":"manifest_file","fields":[{"name":"manifest_path","type":"string","doc":"Location URI with FS scheme","field-id":500},{"name":"manifest_length","type":"long","doc":"Total file size in bytes","field-id":501},{"name":"partition_spec_id","type":"int","doc":"Spec ID used to write","field-id":502},{"name":"added_snapshot_id","type":["null","long"],"doc":"Snapshot ID that added the manifest","default":null,"field-id":503},{"name":"added_data_files_count","type":["null","int"],"doc":"Added entry count","default":null,"field-id":504},{"name":"existing_data_files_count","type":["null","int"],"doc":"Existing entry count","default":null,"field-id":505},{"name":"deleted_data_files_count","type":["null","int"],"doc":"Deleted entry count","default":null,"field-id":506},{"name":"partitions","type":["null",{"type":"array","items":{"type":"record","name":"r508","fields":[{"name":"contains_null","type":"boolean","doc":"True if any file has a null partition value","field-id":509},{"name":"contains_nan","type":["null","boolean"],"doc":"True if any file has a nan partition value","default":null,"field-id":518},{"name":"lower_bound","type":["null","bytes"],"doc":"Partition lower bound for all files","default":null,"field-id":510},{"name":"upper_bound","type":["null","bytes"],"doc":"Partition upper bound for all files","default":null,"field-id":511}]},"element-id":508}],"doc":"Summary for each partition","default":null,"field-id":507},{"name":"added_rows_count","type":["null","long"],"doc":"Added rows count","default":null,"field-id":512},{"name":"existing_rows_count","type":["null","long"],"doc":"Existing rows count","default":null,"field-id":513},{"name":"deleted_rows_count","type":["null","long"],"doc":"Deleted rows count","default":null,"field-id":514}]}\x14avro.codec\x0edeflate\x16snapshot-id&8276787480606260770\x1cformat-version\x021\x1ciceberg.schema\xb4\x1a{"type":"struct","schema-id":0,"fields":[{"id":500,"name":"manifest_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":501,"name":"manifest_length","required":true,"type":"long","doc":"Total file size in bytes"},{"id":502,"name":"partition_spec_id","required":true,"type":"int","doc":"Spec ID used to write"},{"id":503,"name":"added_snapshot_id","required":false,"type":"long","doc":"Snapshot ID that added the manifest"},{"id":504,"name":"added_data_files_count","required":false,"type":"int","doc":"Added entry count"},{"id":505,"name":"existing_data_files_count","required":false,"type":"int","doc":"Existing entry count"},{"id":506,"name":"deleted_data_files_count","required":false,"type":"int","doc":"Deleted entry count"},{"id":507,"name":"partitions","required":false,"type":{"type":"list","element-id":508,"element":{"type":"struct","fields":[{"id":509,"name":"contains_null","required":true,"type":"boolean","doc":"True if any file has a null partition value"},{"id":518,"name":"contains_nan","required":false,"type":"boolean","doc":"True if any file has a nan partition value"},{"id":510,"name":"lower_bound","required":false,"type":"binary","doc":"Partition lower bound for all files"},{"id":511,"name":"upper_bound","required":false,"type":"binary","doc":"Partition upper bound for all 
files"}]},"element-required":true},"doc":"Summary for each partition"},{"id":512,"name":"added_rows_count","required":false,"type":"long","doc":"Added rows count"},{"id":513,"name":"existing_rows_count","required":false,"type":"long","doc":"Existing rows count"},{"id":514,"name":"deleted_rows_count","required":false,"type":"long","doc":"Deleted rows count"}]}$parent-snapshot-id\x08null\x00\xa0C\x02\xe4a\xaf\xab\x01I\x89\x17G\xf3\xb7\x9br\x02\xac\x025\x8c\xbb\r\xc20\x14\x00\x13\xefc\xec\xc4\xcf\xb1\xbd\n\xcd\x93?\xcf\x80\x94\x08)qX\x81\x92\x12\x06\xa2`\tJZj\x1a\x10"\x12\xba\xe2\x8a\x93\xee\xc2\xc4.R\xa0q\x83\xc9\x17/\x12e?\xf7E\x14\x9a\n\xfeK\xec\xe7\xa9\xd0\x88\rnS\x9e\xd0\x85\x98[k:l\xacV\x08&)\xb4\xad\xd1\xa8#H\x90\xde4\xa9\xb5b\xa0\xe2\x97\xa3\xd2\x8e\x9c\x02\xcbM\xce\x86\x83\x03\xc3-X\xe2\x94d\xea\x88BP\x1d\xf0A\xae\xfca\xdc\x7f\xd6\x15\xbb\xbe\xde\xcf\xd3\xf9x\x7f\xd4\x8c\xb1j\xe1V\xff\xf4\x05\xa0C\x02\xe4a\xaf\xab\x01I\x89\x17G\xf3\xb7\x9br', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fmetadata%2Fsnap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro&user.name=root HTTP/1.1" 201 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text user:root, principal:None CALL: {'url': 
'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'1', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fmetadata%2Fversion-hint.text&user.name=root HTTP/1.1" 201 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} GETFILESTATUS 
/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'Obj\x01\x0e\x0cschema\xa4\x02{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":false,"type":"long"},{"id":2,"name":"b","required":false,"type":"string"}]}\x16avro.schema\x96.{"type":"record","name":"manifest_entry","fields":[{"name":"status","type":"int","field-id":0},{"name":"snapshot_id","type":["null","long"],"default":null,"field-id":1},{"name":"data_file","type":{"type":"record","name":"r2","fields":[{"name":"file_path","type":"string","doc":"Location URI with FS scheme","field-id":100},{"name":"file_format","type":"string","doc":"File format name: avro, orc, or parquet","field-id":101},{"name":"partition","type":{"type":"record","name":"r102","fields":[]},"field-id":102},{"name":"record_count","type":"long","doc":"Number of records in the file","field-id":103},{"name":"file_size_in_bytes","type":"long","doc":"Total file size in 
bytes","field-id":104},{"name":"block_size_in_bytes","type":"long","field-id":105},{"name":"column_sizes","type":["null",{"type":"array","items":{"type":"record","name":"k117_v118","fields":[{"name":"key","type":"int","field-id":117},{"name":"value","type":"long","field-id":118}]},"logicalType":"map"}],"doc":"Map of column id to total size on disk","default":null,"field-id":108},{"name":"value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k119_v120","fields":[{"name":"key","type":"int","field-id":119},{"name":"value","type":"long","field-id":120}]},"logicalType":"map"}],"doc":"Map of column id to total count, including null and NaN","default":null,"field-id":109},{"name":"null_value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k121_v122","fields":[{"name":"key","type":"int","field-id":121},{"name":"value","type":"long","field-id":122}]},"logicalType":"map"}],"doc":"Map of column id to null value count","default":null,"field-id":110},{"name":"nan_value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k138_v139","fields":[{"name":"key","type":"int","field-id":138},{"name":"value","type":"long","field-id":139}]},"logicalType":"map"}],"doc":"Map of column id to number of NaN values in the column","default":null,"field-id":137},{"name":"lower_bounds","type":["null",{"type":"array","items":{"type":"record","name":"k126_v127","fields":[{"name":"key","type":"int","field-id":126},{"name":"value","type":"bytes","field-id":127}]},"logicalType":"map"}],"doc":"Map of column id to lower bound","default":null,"field-id":125},{"name":"upper_bounds","type":["null",{"type":"array","items":{"type":"record","name":"k129_v130","fields":[{"name":"key","type":"int","field-id":129},{"name":"value","type":"bytes","field-id":130}]},"logicalType":"map"}],"doc":"Map of column id to upper bound","default":null,"field-id":128},{"name":"key_metadata","type":["null","bytes"],"doc":"Encryption key metadata blob","default":null,"field-id":131},{"name":"split_offsets","type":["null",{"type":"array","items":"long","element-id":133}],"doc":"Splittable offsets","default":null,"field-id":132},{"name":"sort_order_id","type":["null","int"],"doc":"Sort order ID","default":null,"field-id":140}]},"field-id":2}]}\x14avro.codec\x0edeflate\x1cformat-version\x021"partition-spec-id\x020\x1ciceberg.schema\xea${"type":"struct","schema-id":0,"fields":[{"id":0,"name":"status","required":true,"type":"int"},{"id":1,"name":"snapshot_id","required":false,"type":"long"},{"id":2,"name":"data_file","required":true,"type":{"type":"struct","fields":[{"id":100,"name":"file_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":101,"name":"file_format","required":true,"type":"string","doc":"File format name: avro, orc, or parquet"},{"id":102,"name":"partition","required":true,"type":{"type":"struct","fields":[]}},{"id":103,"name":"record_count","required":true,"type":"long","doc":"Number of records in the file"},{"id":104,"name":"file_size_in_bytes","required":true,"type":"long","doc":"Total file size in bytes"},{"id":105,"name":"block_size_in_bytes","required":true,"type":"long"},{"id":108,"name":"column_sizes","required":false,"type":{"type":"map","key-id":117,"key":"int","value-id":118,"value":"long","value-required":true},"doc":"Map of column id to total size on disk"},{"id":109,"name":"value_counts","required":false,"type":{"type":"map","key-id":119,"key":"int","value-id":120,"value":"long","value-required":true},"doc":"Map of column id to total 
count, including null and NaN"},{"id":110,"name":"null_value_counts","required":false,"type":{"type":"map","key-id":121,"key":"int","value-id":122,"value":"long","value-required":true},"doc":"Map of column id to null value count"},{"id":137,"name":"nan_value_counts","required":false,"type":{"type":"map","key-id":138,"key":"int","value-id":139,"value":"long","value-required":true},"doc":"Map of column id to number of NaN values in the column"},{"id":125,"name":"lower_bounds","required":false,"type":{"type":"map","key-id":126,"key":"int","value-id":127,"value":"binary","value-required":true},"doc":"Map of column id to lower bound"},{"id":128,"name":"upper_bounds","required":false,"type":{"type":"map","key-id":129,"key":"int","value-id":130,"value":"binary","value-required":true},"doc":"Map of column id to upper bound"},{"id":131,"name":"key_metadata","required":false,"type":"binary","doc":"Encryption key metadata blob"},{"id":132,"name":"split_offsets","required":false,"type":{"type":"list","element-id":133,"element":"long","element-required":true},"doc":"Splittable offsets"},{"id":140,"name":"sort_order_id","required":false,"type":"int","doc":"Sort order ID"}]}}]}\x1cpartition-spec\x04[]\x00\x14\x89&/\xcb\x13\xaf\xce\xae1c\x15r>\x91\xf7\x02\xa4\x035\x8c\xb1N\xc30\x14E\x1d\xd7\x03\x13\xed\x8f\x98\xd8\x8e\x1d;[\x19\xd8\x01\xc1\xfc\xf4b;-R\x07\x9a8{66\x18\xd9\xf8\x00\x06\xbe\x82!?\xc1\xc8\xca\xcc\x82DR\xb5w:W\xf7\xe8R\xfa\xf9\xfb\xf7\xf3\xf2\xfa\xf4\xf5\x9d}\xd0\xfc\xc1\xc7:\xb6\x1b\x08\x980\x0f\xb1\xc1~\x97\xf2\x14\xbb\x04\xa7\xc5\xef\xfa.\xc5\x16$lC\xd3AU\xfbF9[\x82t\xa6\x00mC\x01NY\x03\xc6k\xa1\x05Z\x19\x94\xcb\x0fob\x0e\x97\x05\xd7\xbeq6\x08\xc5\x8d\xaej\xae\x95B^\x17(x\x94F\x96\xb6D\x1d\x05\xf2Y\x96\x17\x8f\xd8\xee\xfb\x98\xce\xaf/oo\xee\xaf\xee\xc6\xecy9\x0c\xc3\x9a2\xfa\xbe`o\x0b2\xc1\x98\xb11\x9b\x810B\xe8\x0c+r\x0c\xa3\xf2\xd0\xfd\xa9\xb3\xaa\x9a\x1cz6y\xff\x14\x89&/\xcb\x13\xaf\xce\xae1c\x15r>\x91\xf7', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fmetadata%2F359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro&user.name=root HTTP/1.1" 201 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 
'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'{\n "format-version" : 1,\n "table-uuid" : "762d77fc-31c8-4b8f-a430-fe8ce8ac91f5",\n "location" : "/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28",\n "last-updated-ms" : 1743790503606,\n "last-column-id" : 2,\n "schema" : {\n "type" : "struct",\n "schema-id" : 0,\n "fields" : [ {\n "id" : 1,\n "name" : "a",\n "required" : false,\n "type" : "long"\n }, {\n "id" : 2,\n "name" : "b",\n "required" : false,\n "type" : "string"\n } ]\n },\n "current-schema-id" : 0,\n "schemas" : [ {\n "type" : "struct",\n "schema-id" : 0,\n "fields" : [ {\n "id" : 1,\n "name" : "a",\n "required" : false,\n "type" : "long"\n }, {\n "id" : 2,\n "name" : "b",\n "required" : false,\n "type" : "string"\n } ]\n } ],\n "partition-spec" : [ ],\n "default-spec-id" : 0,\n "partition-specs" : [ {\n "spec-id" : 0,\n "fields" : [ ]\n } ],\n "last-partition-id" : 999,\n "default-sort-order-id" : 0,\n 
"sort-orders" : [ {\n "order-id" : 0,\n "fields" : [ ]\n } ],\n "properties" : {\n "owner" : "root"\n },\n "current-snapshot-id" : 8276787480606260770,\n "refs" : {\n "main" : {\n "snapshot-id" : 8276787480606260770,\n "type" : "branch"\n }\n },\n "snapshots" : [ {\n "snapshot-id" : 8276787480606260770,\n "timestamp-ms" : 1743790503606,\n "summary" : {\n "operation" : "append",\n "spark.app.id" : "local-1743790492634",\n "added-data-files" : "1",\n "added-records" : "100",\n "added-files-size" : "967",\n "changed-partition-count" : "1",\n "total-records" : "100",\n "total-files-size" : "967",\n "total-data-files" : "1",\n "total-delete-files" : "0",\n "total-position-deletes" : "0",\n "total-equality-deletes" : "0"\n },\n "manifest-list" : "/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro",\n "schema-id" : 0\n } ],\n "statistics" : [ ],\n "snapshot-log" : [ {\n "timestamp-ms" : 1743790503606,\n "snapshot-id" : 8276787480606260770\n } ],\n "metadata-log" : [ ]\n}', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fmetadata%2Fv1.metadata.json&user.name=root HTTP/1.1" 201 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} Adding another dataframe. 
result files: ['/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json'] Command to send: c o50 sc e Answer received: !yro334 Command to send: c o334 defaultParallelism e Answer received: !yi1 Command to send: c o61 range i0 i100 i1 i1 e Answer received: !yro335 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo336 Command to send: c o336 add sa e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro336 e Answer received: !yro337 Command to send: c o335 toDF ro337 e Answer received: !yro338 Command to send: c o50 sc e Answer received: !yro339 Command to send: c o339 defaultParallelism e Answer received: !yi1 Command to send: c o61 range i1 i101 i1 i1 e Answer received: !yro340 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo341 Command to send: c o341 add sb e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro341 e Answer received: !yro342 Command to send: c o340 toDF ro342 e Answer received: !yro343 Command to send: c o343 apply sb e Answer received: !yro344 Command to send: r u SparkSession rj e Answer received: !ycorg.apache.spark.sql.SparkSession Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e Answer received: !ym Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e Answer received: !yro345 Command to send: c o345 isDefined e Answer received: !ybtrue Command to send: r u SparkSession rj e Answer received: !ycorg.apache.spark.sql.SparkSession Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e Answer received: !ym Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e Answer received: !yro346 Command to send: c o346 get e Answer received: !yro347 Command to send: r u SparkSession$ rj e Answer received: !ycorg.apache.spark.sql.SparkSession$ Command to send: r m org.apache.spark.sql.SparkSession$ MODULE$ e Answer received: !yro348 Command to send: i java.util.HashMap e Answer received: !yao349 Command to send: c o348 applyModifiableSettings ro347 ro349 e Answer received: !yv Command to send: c o61 parseDataType s"string" e Answer received: !yro350 Command to send: c o344 cast ro350 e Answer received: !yro351 Command to send: c o343 withColumn sb ro351 e Answer received: !yro352 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions row_number e Answer received: !ym Command to send: c 
z:org.apache.spark.sql.functions row_number e Answer received: !yro353 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !yro354 Command to send: r u org rj e Answer received: !yp Command to send: r u org.apache rj e Answer received: !yp Command to send: r u org.apache.spark rj e Answer received: !yp Command to send: r u org.apache.spark.sql rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions.Window rj e Answer received: !ycorg.apache.spark.sql.expressions.Window Command to send: r m org.apache.spark.sql.expressions.Window orderBy e Answer received: !ym Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo355 Command to send: c o355 add ro354 e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro355 e Answer received: !yro356 Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro356 e Answer received: !yro357 Command to send: c o353 over ro357 e Answer received: !yro358 Command to send: c o338 withColumn srow_index ro358 e Answer received: !yro359 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions row_number e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions row_number e Answer received: !yro360 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !yro361 Command to send: r u org rj e Answer received: !yp Command to send: r u org.apache rj e Answer received: !yp Command to send: r u org.apache.spark rj e Answer received: !yp Command to send: r u org.apache.spark.sql rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions.Window rj e Answer received: !ycorg.apache.spark.sql.expressions.Window Command to send: r m org.apache.spark.sql.expressions.Window orderBy e Answer received: !ym Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo362 Command to send: c o362 add ro361 e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro362 e Answer received: !yro363 Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro363 e Answer received: !yro364 Command to send: c o360 over ro364 e Answer received: !yro365 Command to send: c o352 withColumn srow_index ro365 e Answer received: !yro366 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e 
Answer received: !ylo367 Command to send: c o367 add srow_index e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro367 e Answer received: !yro368 Command to send: c o359 join ro366 ro368 sinner e Answer received: !yro369 Command to send: c o369 drop srow_index e Answer received: !yro370 Command to send: c o370 writeTo stest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28 e Answer received: !yro371 Command to send: c o371 append e Command to send: m d o215 e Answer received: !yv Command to send: m d o214 e Answer received: !yv Command to send: m d o191 e Answer received: !yv Command to send: m d o255 e Answer received: !yv Command to send: m d o254 e Answer received: !yv Command to send: m d o231 e Answer received: !yv Command to send: m d o287 e Answer received: !yv Command to send: m d o274 e Answer received: !yv Command to send: m d o281 e Answer received: !yv Command to send: m d o296 e Answer received: !yv Command to send: m d o301 e Answer received: !yv Command to send: m d o309 e Answer received: !yv Command to send: m d o315 e Answer received: !yv Command to send: m d o322 e Answer received: !yv Command to send: m d o327 e Answer received: !yv Command to send: m d o294 e Answer received: !yv Command to send: m d o295 e Answer received: !yv Command to send: m d o297 e Answer received: !yv Command to send: m d o298 e Answer received: !yv Command to send: m d o299 e Answer received: !yv Command to send: m d o300 e Answer received: !yv Command to send: m d o302 e Answer received: !yv Command to send: m d o303 e Answer received: !yv Command to send: m d o304 e Answer received: !yv Command to send: m d o305 e Answer received: !yv Command to send: m d o306 e Answer received: !yv Command to send: m d o308 e Answer received: !yv Command to send: m d o310 e Answer received: !yv Command to send: m d o311 e Answer received: !yv Command to send: m d o312 e Answer received: !yv Command to send: m d o313 e Answer received: !yv Command to send: m d o314 e Answer received: !yv Command to send: m d o316 e Answer received: !yv Command to send: m d o317 e Answer received: !yv Command to send: m d o318 e Answer received: !yv Command to send: m d o319 e Answer received: !yv Command to send: m d o320 e Answer received: !yv Command to send: m d o321 e Answer received: !yv Command to send: m d o323 e Answer received: !yv Command to send: m d o324 e Answer received: !yv Command to send: m d o325 e Answer received: !yv Command to send: m d o326 e Answer received: !yv Command to send: m d o328 e Answer received: !yv Command to send: m d o329 e Answer received: !yv Command to send: m d o332 e Answer received: !yv Command to send: m d o333 e Answer received: !yv Command to send: m d o336 e Answer received: !yv Command to send: m d o341 e Answer received: !yv Command to send: m d o349 e Answer received: !yv Command to send: m d o355 e Answer received: !yv Command to send: m d o334 e Answer received: !yv Command to send: m d o335 e Answer received: !yv Command to send: m d o337 e Answer received: !yv Command to send: m d o338 e Answer received: !yv Command to send: m d o339 e Answer received: !yv Command to send: m d o340 e Answer received: !yv Command to send: m d o342 e Answer received: !yv Command to send: m d o343 e Answer received: !yv Command to send: m d o344 e Answer received: !yv Command to send: m d o345 e Answer received: !yv Command to send: m d o346 e Answer received: !yv Command to send: m d o348 e Answer received: !yv Command to send: m d o350 
e Answer received: !yv Command to send: m d o351 e Answer received: !yv Command to send: m d o353 e Answer received: !yv Command to send: m d o354 e Answer received: !yv Command to send: m d o356 e Answer received: !yv Command to send: m d o357 e Answer received: !yv Command to send: m d o358 e Answer received: !yv Command to send: m d o362 e Answer received: !yv Command to send: m d o367 e Answer received: !yv Answer received: !yv GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': 
b'PAR1\x15\x00\x15\xc0\x0c\x15\xf6\x02\x15\xbd\xf3\xd4\x95\x06\x1c\x15\xc8\x01\x15\x00\x15\x08\x15\x08\x00\x00\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00-\xc5\xd7"\x02\x00\x00\x00\xc0\x90Y![$+\xb3\xb23C\x8a\xec\x8c\xca\x8e\xf2\xff\xff\xe0\xc1\xdd\xcb\x05\x02\xffZ\xdc\xea6\x07\xdd\xee\x0ew\xba\xcb\xdd\xeeq\xc8aG\xdc\xeb>\xf7;\xea\x01\x0fz\xc8\xc3\x1e\xf1\xa8\xc7<\xee\x98\'<\xe9\xb8\xa7\x9c\xf0\xb4g<\xeb9\xcf;\xe9\x05/z\xc9\xcb^\xf1\xaaSN;\xe35\xaf{\xc3\x9b\xde\xf2\xb6w\x9c\xf5\xae\xf7\xbc\xef\x03\x1f\xfa\xc89\x1f\xfb\xc4\xa7\xce\xfb\xcc\x05\x17}\xee\x0b\x97|\xe9+_\xfb\xc6\xb7\xbe\xf3\xbd\xcb~\xf0\xa3\x9f\xfc\xec\x8a\xab\xae\xf9\xc5\xaf~\xf3\xbb?\xfc\xe9/\xd7\xfd\xed\x1f7\xdc\xf4\xaf\xff\x00\x02\xc2\xe7q \x03\x00\x00\x15\x00\x15\xa0\t\x15\xea\x02\x15\x8d\xbc\xbf\xb8\t\x1c\x15\xc8\x01\x15\x00\x15\x08\x15\x08\x00\x00\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00\x1d\xd2\xb1\x81B1\x0cD\xc1\x0f\xd7\x10X\xb2$\xf7\xdf\x18\xe7I^\xb4\x93\xed\xeby\x9e\xef\xeb?\xeb&n\xf2f\xdf\xd4M\xdf\xcc\xcdy\xdf\xf1G\xbf\xba44uki\xeb(\xbb\xd8\xc5.v\xb1\x8b]\xecb\x17\xbb\xd8\xc5\x06\x1bl\xb0\xc1\x06\x1bl\xb0\xc1\x06\x1bl\xb2\xc9&\x9bl\xb2\xc9&\x9bl\xb2\xc9nv\xb3\x9b\xdd\xecf7\xbb\xd9\xcdnv\xb3\xc5\x16[l\xb1\xc5\x16[l\xb1\xc5\x16\xdbl\xb3\xcd6\xdbl\xb3\xcd6\xdbl\xb3\xc3\x0e;\xec\xb0\xc3\x0e;\xec\xb0\xc3\x0e{\xd8\xc3\x1e\xf6\xb0\x87=\xeca\x0f{\xd8s\xfe|\xe3\xf3\x03\xd4\xdb\x86\xadP\x02\x00\x00\x19\x11\x02\x19\x18\x08\x00\x00\x00\x00\x00\x00\x00\x00\x19\x18\x08c\x00\x00\x00\x00\x00\x00\x00\x15\x02\x19\x16\x00\x00\x19\x11\x02\x19\x18\x011\x19\x18\x0299\x15\x02\x19\x16\x00\x00\x19\x1c\x16\x08\x15\xaa\x03\x16\x00\x00\x00\x19\x1c\x16\xb2\x03\x15\x9e\x03\x16\x00\x00\x00\x15\x02\x19\x00&\xb2\x03\x1c\x15\x0c\x19%\x00\x08\x19\x18\x01b\x15\x04\x16\xc8\x01\x16\xd4\t\x16\x9e\x03&\xb2\x03<6\x00(\x0299\x18\x011\x00\x19\x1c\x15\x00\x15\x00\x15\x02\x00\x00\x16\xc8\x07\x15\x18\x16\x8e\x07\x15$\x00\x16\xc8\x16\x16\xc8\x01&\x08\x16\xc8\x06\x14\x00\x00\x19\x1c\x18\x0eiceberg.schema\x18\x90\x01{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":true,"type":"long"},{"id":2,"name":"b","required":true,"type":"string"}]}\x00\x18Jparquet-mr version 1.12.3 (build f8dced182c4c1fbdec6ccb3185537b5a01e6ed6b)\x19,\x1c\x00\x00\x1c\x00\x00\x00\xcf\x01\x00\x00PAR1', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fdata%2F00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet&user.name=root HTTP/1.1" 201 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet', 'Content-Length': '0', 
'Server': 'Jetty(6.1.26)'} b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': 
b'PAR1\x15\x00\x15\xc0\x0c\x15\xf6\x02\x15\xbd\xf3\xd4\x95\x06\x1c\x15\xc8\x01\x15\x00\x15\x08\x15\x08\x00\x00\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00-\xc5\xd7"\x02\x00\x00\x00\xc0\x90Y![$+\xb3\xb23C\x8a\xec\x8c\xca\x8e\xf2\xff\xff\xe0\xc1\xdd\xcb\x05\x02\xffZ\xdc\xea6\x07\xdd\xee\x0ew\xba\xcb\xdd\xeeq\xc8aG\xdc\xeb>\xf7;\xea\x01\x0fz\xc8\xc3\x1e\xf1\xa8\xc7<\xee\x98\'<\xe9\xb8\xa7\x9c\xf0\xb4g<\xeb9\xcf;\xe9\x05/z\xc9\xcb^\xf1\xaaSN;\xe35\xaf{\xc3\x9b\xde\xf2\xb6w\x9c\xf5\xae\xf7\xbc\xef\x03\x1f\xfa\xc89\x1f\xfb\xc4\xa7\xce\xfb\xcc\x05\x17}\xee\x0b\x97|\xe9+_\xfb\xc6\xb7\xbe\xf3\xbd\xcb~\xf0\xa3\x9f\xfc\xec\x8a\xab\xae\xf9\xc5\xaf~\xf3\xbb?\xfc\xe9/\xd7\xfd\xed\x1f7\xdc\xf4\xaf\xff\x00\x02\xc2\xe7q \x03\x00\x00\x15\x00\x15\xa0\t\x15\xea\x02\x15\x8d\xbc\xbf\xb8\t\x1c\x15\xc8\x01\x15\x00\x15\x08\x15\x08\x00\x00\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00\x1d\xd2\xb1\x81B1\x0cD\xc1\x0f\xd7\x10X\xb2$\xf7\xdf\x18\xe7I^\xb4\x93\xed\xeby\x9e\xef\xeb?\xeb&n\xf2f\xdf\xd4M\xdf\xcc\xcdy\xdf\xf1G\xbf\xba44uki\xeb(\xbb\xd8\xc5.v\xb1\x8b]\xecb\x17\xbb\xd8\xc5\x06\x1bl\xb0\xc1\x06\x1bl\xb0\xc1\x06\x1bl\xb2\xc9&\x9bl\xb2\xc9&\x9bl\xb2\xc9nv\xb3\x9b\xdd\xecf7\xbb\xd9\xcdnv\xb3\xc5\x16[l\xb1\xc5\x16[l\xb1\xc5\x16\xdbl\xb3\xcd6\xdbl\xb3\xcd6\xdbl\xb3\xc3\x0e;\xec\xb0\xc3\x0e;\xec\xb0\xc3\x0e{\xd8\xc3\x1e\xf6\xb0\x87=\xeca\x0f{\xd8s\xfe|\xe3\xf3\x03\xd4\xdb\x86\xadP\x02\x00\x00\x19\x11\x02\x19\x18\x08\x00\x00\x00\x00\x00\x00\x00\x00\x19\x18\x08c\x00\x00\x00\x00\x00\x00\x00\x15\x02\x19\x16\x00\x00\x19\x11\x02\x19\x18\x011\x19\x18\x0299\x15\x02\x19\x16\x00\x00\x19\x1c\x16\x08\x15\xaa\x03\x16\x00\x00\x00\x19\x1c\x16\xb2\x03\x15\x9e\x03\x16\x00\x00\x00\x15\x02\x19\x00&\xb2\x03\x1c\x15\x0c\x19%\x00\x08\x19\x18\x01b\x15\x04\x16\xc8\x01\x16\xd4\t\x16\x9e\x03&\xb2\x03<6\x00(\x0299\x18\x011\x00\x19\x1c\x15\x00\x15\x00\x15\x02\x00\x00\x16\xc8\x07\x15\x18\x16\x8e\x07\x15$\x00\x16\xc8\x16\x16\xc8\x01&\x08\x16\xc8\x06\x14\x00\x00\x19\x1c\x18\x0eiceberg.schema\x18\x90\x01{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":true,"type":"long"},{"id":2,"name":"b","required":true,"type":"string"}]}\x00\x18Jparquet-mr version 1.12.3 (build f8dced182c4c1fbdec6ccb3185537b5a01e6ed6b)\x19,\x1c\x00\x00\x1c\x00\x00\x00\xcf\x01\x00\x00PAR1', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fdata%2F00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet&user.name=root HTTP/1.1" 201 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet', 'Content-Length': '0', 
'Server': 'Jetty(6.1.26)'} b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'Obj\x01\x0c\x16avro.schema\xfa\x1b{"type":"record","name":"manifest_file","fields":[{"name":"manifest_path","type":"string","doc":"Location URI with FS scheme","field-id":500},{"name":"manifest_length","type":"long","doc":"Total file size in 
bytes","field-id":501},{"name":"partition_spec_id","type":"int","doc":"Spec ID used to write","field-id":502},{"name":"added_snapshot_id","type":["null","long"],"doc":"Snapshot ID that added the manifest","default":null,"field-id":503},{"name":"added_data_files_count","type":["null","int"],"doc":"Added entry count","default":null,"field-id":504},{"name":"existing_data_files_count","type":["null","int"],"doc":"Existing entry count","default":null,"field-id":505},{"name":"deleted_data_files_count","type":["null","int"],"doc":"Deleted entry count","default":null,"field-id":506},{"name":"partitions","type":["null",{"type":"array","items":{"type":"record","name":"r508","fields":[{"name":"contains_null","type":"boolean","doc":"True if any file has a null partition value","field-id":509},{"name":"contains_nan","type":["null","boolean"],"doc":"True if any file has a nan partition value","default":null,"field-id":518},{"name":"lower_bound","type":["null","bytes"],"doc":"Partition lower bound for all files","default":null,"field-id":510},{"name":"upper_bound","type":["null","bytes"],"doc":"Partition upper bound for all files","default":null,"field-id":511}]},"element-id":508}],"doc":"Summary for each partition","default":null,"field-id":507},{"name":"added_rows_count","type":["null","long"],"doc":"Added rows count","default":null,"field-id":512},{"name":"existing_rows_count","type":["null","long"],"doc":"Existing rows count","default":null,"field-id":513},{"name":"deleted_rows_count","type":["null","long"],"doc":"Deleted rows count","default":null,"field-id":514}]}\x14avro.codec\x0edeflate\x16snapshot-id&8276787480606260770\x1cformat-version\x021\x1ciceberg.schema\xb4\x1a{"type":"struct","schema-id":0,"fields":[{"id":500,"name":"manifest_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":501,"name":"manifest_length","required":true,"type":"long","doc":"Total file size in bytes"},{"id":502,"name":"partition_spec_id","required":true,"type":"int","doc":"Spec ID used to write"},{"id":503,"name":"added_snapshot_id","required":false,"type":"long","doc":"Snapshot ID that added the manifest"},{"id":504,"name":"added_data_files_count","required":false,"type":"int","doc":"Added entry count"},{"id":505,"name":"existing_data_files_count","required":false,"type":"int","doc":"Existing entry count"},{"id":506,"name":"deleted_data_files_count","required":false,"type":"int","doc":"Deleted entry count"},{"id":507,"name":"partitions","required":false,"type":{"type":"list","element-id":508,"element":{"type":"struct","fields":[{"id":509,"name":"contains_null","required":true,"type":"boolean","doc":"True if any file has a null partition value"},{"id":518,"name":"contains_nan","required":false,"type":"boolean","doc":"True if any file has a nan partition value"},{"id":510,"name":"lower_bound","required":false,"type":"binary","doc":"Partition lower bound for all files"},{"id":511,"name":"upper_bound","required":false,"type":"binary","doc":"Partition upper bound for all files"}]},"element-required":true},"doc":"Summary for each partition"},{"id":512,"name":"added_rows_count","required":false,"type":"long","doc":"Added rows count"},{"id":513,"name":"existing_rows_count","required":false,"type":"long","doc":"Existing rows count"},{"id":514,"name":"deleted_rows_count","required":false,"type":"long","doc":"Deleted rows 
count"}]}$parent-snapshot-id\x08null\x00\xa0C\x02\xe4a\xaf\xab\x01I\x89\x17G\xf3\xb7\x9br\x02\xac\x025\x8c\xbb\r\xc20\x14\x00\x13\xefc\xec\xc4\xcf\xb1\xbd\n\xcd\x93?\xcf\x80\x94\x08)qX\x81\x92\x12\x06\xa2`\tJZj\x1a\x10"\x12\xba\xe2\x8a\x93\xee\xc2\xc4.R\xa0q\x83\xc9\x17/\x12e?\xf7E\x14\x9a\n\xfeK\xec\xe7\xa9\xd0\x88\rnS\x9e\xd0\x85\x98[k:l\xacV\x08&)\xb4\xad\xd1\xa8#H\x90\xde4\xa9\xb5b\xa0\xe2\x97\xa3\xd2\x8e\x9c\x02\xcbM\xce\x86\x83\x03\xc3-X\xe2\x94d\xea\x88BP\x1d\xf0A\xae\xfca\xdc\x7f\xd6\x15\xbb\xbe\xde\xcf\xd3\xf9x\x7f\xd4\x8c\xb1j\xe1V\xff\xf4\x05\xa0C\x02\xe4a\xaf\xab\x01I\x89\x17G\xf3\xb7\x9br', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fmetadata%2Fsnap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro&user.name=root HTTP/1.1" 201 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT 
/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'{\n "format-version" : 1,\n "table-uuid" : "762d77fc-31c8-4b8f-a430-fe8ce8ac91f5",\n "location" : "/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28",\n "last-updated-ms" : 1743790504083,\n "last-column-id" : 2,\n "schema" : {\n "type" : "struct",\n "schema-id" : 0,\n "fields" : [ {\n "id" : 1,\n "name" : "a",\n "required" : false,\n "type" : "long"\n }, {\n "id" : 2,\n "name" : "b",\n "required" : false,\n "type" : "string"\n } ]\n },\n "current-schema-id" : 0,\n "schemas" : [ {\n "type" : "struct",\n "schema-id" : 0,\n "fields" : [ {\n "id" : 1,\n "name" : "a",\n "required" : false,\n "type" : "long"\n }, {\n "id" : 2,\n "name" : "b",\n "required" : false,\n "type" : "string"\n } ]\n } ],\n "partition-spec" : [ ],\n "default-spec-id" : 0,\n "partition-specs" : [ {\n "spec-id" : 0,\n "fields" : [ ]\n } ],\n "last-partition-id" : 999,\n "default-sort-order-id" : 0,\n "sort-orders" : [ {\n "order-id" : 0,\n "fields" : [ ]\n } ],\n "properties" : {\n "owner" : "root"\n },\n "current-snapshot-id" : 1118366645057585943,\n "refs" : {\n "main" : {\n "snapshot-id" : 1118366645057585943,\n "type" : "branch"\n }\n },\n "snapshots" : [ {\n "snapshot-id" : 8276787480606260770,\n "timestamp-ms" : 1743790503606,\n "summary" : {\n "operation" : "append",\n "spark.app.id" : "local-1743790492634",\n "added-data-files" : "1",\n "added-records" : "100",\n "added-files-size" : "967",\n "changed-partition-count" : "1",\n "total-records" : "100",\n "total-files-size" : "967",\n "total-data-files" : "1",\n "total-delete-files" : "0",\n "total-position-deletes" : "0",\n "total-equality-deletes" : "0"\n },\n "manifest-list" : "/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro",\n "schema-id" : 0\n }, {\n "snapshot-id" : 1118366645057585943,\n "parent-snapshot-id" : 8276787480606260770,\n "timestamp-ms" : 1743790504083,\n "summary" : {\n "operation" : "append",\n "spark.app.id" : "local-1743790492634",\n "added-data-files" : 
"1",\n "added-records" : "100",\n "added-files-size" : "967",\n "changed-partition-count" : "1",\n "total-records" : "200",\n "total-files-size" : "1934",\n "total-data-files" : "2",\n "total-delete-files" : "0",\n "total-position-deletes" : "0",\n "total-equality-deletes" : "0"\n },\n "manifest-list" : "/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro",\n "schema-id" : 0\n } ],\n "statistics" : [ ],\n "snapshot-log" : [ {\n "timestamp-ms" : 1743790503606,\n "snapshot-id" : 8276787480606260770\n }, {\n "timestamp-ms" : 1743790504083,\n "snapshot-id" : 1118366645057585943\n } ],\n "metadata-log" : [ {\n "timestamp-ms" : 1743790503606,\n "metadata-file" : "/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json"\n } ]\n}', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fmetadata%2Fv2.metadata.json&user.name=root HTTP/1.1" 201 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 
172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'Obj\x01\x0c\x16avro.schema\xfa\x1b{"type":"record","name":"manifest_file","fields":[{"name":"manifest_path","type":"string","doc":"Location URI with FS scheme","field-id":500},{"name":"manifest_length","type":"long","doc":"Total file size in bytes","field-id":501},{"name":"partition_spec_id","type":"int","doc":"Spec ID used to write","field-id":502},{"name":"added_snapshot_id","type":["null","long"],"doc":"Snapshot ID that added the manifest","default":null,"field-id":503},{"name":"added_data_files_count","type":["null","int"],"doc":"Added entry count","default":null,"field-id":504},{"name":"existing_data_files_count","type":["null","int"],"doc":"Existing entry count","default":null,"field-id":505},{"name":"deleted_data_files_count","type":["null","int"],"doc":"Deleted entry count","default":null,"field-id":506},{"name":"partitions","type":["null",{"type":"array","items":{"type":"record","name":"r508","fields":[{"name":"contains_null","type":"boolean","doc":"True if any file has a null partition value","field-id":509},{"name":"contains_nan","type":["null","boolean"],"doc":"True if any file has a nan partition value","default":null,"field-id":518},{"name":"lower_bound","type":["null","bytes"],"doc":"Partition lower bound for all files","default":null,"field-id":510},{"name":"upper_bound","type":["null","bytes"],"doc":"Partition upper bound for all files","default":null,"field-id":511}]},"element-id":508}],"doc":"Summary for each partition","default":null,"field-id":507},{"name":"added_rows_count","type":["null","long"],"doc":"Added rows count","default":null,"field-id":512},{"name":"existing_rows_count","type":["null","long"],"doc":"Existing rows count","default":null,"field-id":513},{"name":"deleted_rows_count","type":["null","long"],"doc":"Deleted rows 
count","default":null,"field-id":514}]}\x14avro.codec\x0edeflate\x16snapshot-id&1118366645057585943\x1cformat-version\x021\x1ciceberg.schema\xb4\x1a{"type":"struct","schema-id":0,"fields":[{"id":500,"name":"manifest_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":501,"name":"manifest_length","required":true,"type":"long","doc":"Total file size in bytes"},{"id":502,"name":"partition_spec_id","required":true,"type":"int","doc":"Spec ID used to write"},{"id":503,"name":"added_snapshot_id","required":false,"type":"long","doc":"Snapshot ID that added the manifest"},{"id":504,"name":"added_data_files_count","required":false,"type":"int","doc":"Added entry count"},{"id":505,"name":"existing_data_files_count","required":false,"type":"int","doc":"Existing entry count"},{"id":506,"name":"deleted_data_files_count","required":false,"type":"int","doc":"Deleted entry count"},{"id":507,"name":"partitions","required":false,"type":{"type":"list","element-id":508,"element":{"type":"struct","fields":[{"id":509,"name":"contains_null","required":true,"type":"boolean","doc":"True if any file has a null partition value"},{"id":518,"name":"contains_nan","required":false,"type":"boolean","doc":"True if any file has a nan partition value"},{"id":510,"name":"lower_bound","required":false,"type":"binary","doc":"Partition lower bound for all files"},{"id":511,"name":"upper_bound","required":false,"type":"binary","doc":"Partition upper bound for all files"}]},"element-required":true},"doc":"Summary for each partition"},{"id":512,"name":"added_rows_count","required":false,"type":"long","doc":"Added rows count"},{"id":513,"name":"existing_rows_count","required":false,"type":"long","doc":"Existing rows count"},{"id":514,"name":"deleted_rows_count","required":false,"type":"long","doc":"Deleted rows count"}]}$parent-snapshot-id&8276787480606260770\x00Vm\x1eI\xe9~m\xf5#\xf2\xb6\x18h\x03\x8f\x9c\x04\xa0\x03\xb5\xce=JCA\x14\x05\xe0d6\xe2\n&o~\xee\xbc\xb9\xb3\x954\xc3\xfc\xdck\x84\x04\xe1\xbd\x89;\x904\x82\xa5\xa9,\\@\xfa\xf4\x16n\xc2\xd2N\xac,DP\xd4@\x1a\xfbp\x8aS\x1c8|[\xd1]\x14\xca4\x9c\xc7\x9aZ\xea*qZ/[\xd7hl\xf1\xb8\x94\xe5zl4D\x1d\x17\x95\xc7\x18ra\x83\xbe\x8f\x1a\x9d\x8d\xe0\xab\x8dh\xbc\x8b\xae\x80\x02\x95\xbc\xae\x06\xbb\x15\xb5txt^1@0\xd2\x02\xb3\x84\xdeV\x99\xb9h\xa9\xc1\x14\x8d\x065\xda$Wj\x96\xae\x86\xcb\xcf\xf9D\xecn^\xdf\xf7\x0f\xf7\xd7gB\x88\xc9!O\xd3\xbf\xda\x9e\\j]\xa0`\x01\xa5g\xf6\x12\x02x\x89\x80$\xa9\xaa\xda\x13\xe5l{8J\xbf\x7f\xa5\x8f\x1f_o\xb7w\x9b\xe7\x97\xe9?\xea\x0fVm\x1eI\xe9~m\xf5#\xf2\xb6\x18h\x03\x8f\x9c', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fmetadata%2Fsnap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro&user.name=root HTTP/1.1" 201 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 
GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': 
b'Obj\x01\x0e\x0cschema\xa4\x02{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":false,"type":"long"},{"id":2,"name":"b","required":false,"type":"string"}]}\x16avro.schema\x96.{"type":"record","name":"manifest_entry","fields":[{"name":"status","type":"int","field-id":0},{"name":"snapshot_id","type":["null","long"],"default":null,"field-id":1},{"name":"data_file","type":{"type":"record","name":"r2","fields":[{"name":"file_path","type":"string","doc":"Location URI with FS scheme","field-id":100},{"name":"file_format","type":"string","doc":"File format name: avro, orc, or parquet","field-id":101},{"name":"partition","type":{"type":"record","name":"r102","fields":[]},"field-id":102},{"name":"record_count","type":"long","doc":"Number of records in the file","field-id":103},{"name":"file_size_in_bytes","type":"long","doc":"Total file size in bytes","field-id":104},{"name":"block_size_in_bytes","type":"long","field-id":105},{"name":"column_sizes","type":["null",{"type":"array","items":{"type":"record","name":"k117_v118","fields":[{"name":"key","type":"int","field-id":117},{"name":"value","type":"long","field-id":118}]},"logicalType":"map"}],"doc":"Map of column id to total size on disk","default":null,"field-id":108},{"name":"value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k119_v120","fields":[{"name":"key","type":"int","field-id":119},{"name":"value","type":"long","field-id":120}]},"logicalType":"map"}],"doc":"Map of column id to total count, including null and NaN","default":null,"field-id":109},{"name":"null_value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k121_v122","fields":[{"name":"key","type":"int","field-id":121},{"name":"value","type":"long","field-id":122}]},"logicalType":"map"}],"doc":"Map of column id to null value count","default":null,"field-id":110},{"name":"nan_value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k138_v139","fields":[{"name":"key","type":"int","field-id":138},{"name":"value","type":"long","field-id":139}]},"logicalType":"map"}],"doc":"Map of column id to number of NaN values in the column","default":null,"field-id":137},{"name":"lower_bounds","type":["null",{"type":"array","items":{"type":"record","name":"k126_v127","fields":[{"name":"key","type":"int","field-id":126},{"name":"value","type":"bytes","field-id":127}]},"logicalType":"map"}],"doc":"Map of column id to lower bound","default":null,"field-id":125},{"name":"upper_bounds","type":["null",{"type":"array","items":{"type":"record","name":"k129_v130","fields":[{"name":"key","type":"int","field-id":129},{"name":"value","type":"bytes","field-id":130}]},"logicalType":"map"}],"doc":"Map of column id to upper bound","default":null,"field-id":128},{"name":"key_metadata","type":["null","bytes"],"doc":"Encryption key metadata blob","default":null,"field-id":131},{"name":"split_offsets","type":["null",{"type":"array","items":"long","element-id":133}],"doc":"Splittable offsets","default":null,"field-id":132},{"name":"sort_order_id","type":["null","int"],"doc":"Sort order 
ID","default":null,"field-id":140}]},"field-id":2}]}\x14avro.codec\x0edeflate\x1cformat-version\x021"partition-spec-id\x020\x1ciceberg.schema\xea${"type":"struct","schema-id":0,"fields":[{"id":0,"name":"status","required":true,"type":"int"},{"id":1,"name":"snapshot_id","required":false,"type":"long"},{"id":2,"name":"data_file","required":true,"type":{"type":"struct","fields":[{"id":100,"name":"file_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":101,"name":"file_format","required":true,"type":"string","doc":"File format name: avro, orc, or parquet"},{"id":102,"name":"partition","required":true,"type":{"type":"struct","fields":[]}},{"id":103,"name":"record_count","required":true,"type":"long","doc":"Number of records in the file"},{"id":104,"name":"file_size_in_bytes","required":true,"type":"long","doc":"Total file size in bytes"},{"id":105,"name":"block_size_in_bytes","required":true,"type":"long"},{"id":108,"name":"column_sizes","required":false,"type":{"type":"map","key-id":117,"key":"int","value-id":118,"value":"long","value-required":true},"doc":"Map of column id to total size on disk"},{"id":109,"name":"value_counts","required":false,"type":{"type":"map","key-id":119,"key":"int","value-id":120,"value":"long","value-required":true},"doc":"Map of column id to total count, including null and NaN"},{"id":110,"name":"null_value_counts","required":false,"type":{"type":"map","key-id":121,"key":"int","value-id":122,"value":"long","value-required":true},"doc":"Map of column id to null value count"},{"id":137,"name":"nan_value_counts","required":false,"type":{"type":"map","key-id":138,"key":"int","value-id":139,"value":"long","value-required":true},"doc":"Map of column id to number of NaN values in the column"},{"id":125,"name":"lower_bounds","required":false,"type":{"type":"map","key-id":126,"key":"int","value-id":127,"value":"binary","value-required":true},"doc":"Map of column id to lower bound"},{"id":128,"name":"upper_bounds","required":false,"type":{"type":"map","key-id":129,"key":"int","value-id":130,"value":"binary","value-required":true},"doc":"Map of column id to upper bound"},{"id":131,"name":"key_metadata","required":false,"type":"binary","doc":"Encryption key metadata blob"},{"id":132,"name":"split_offsets","required":false,"type":{"type":"list","element-id":133,"element":"long","element-required":true},"doc":"Splittable offsets"},{"id":140,"name":"sort_order_id","required":false,"type":"int","doc":"Sort order ID"}]}}]}\x1cpartition-spec\x04[]\x00N\x9b\x96R\xa0\x1dfv\x80\x8d\xbdgE\xa4)\x83\x02\xa0\x035\x8c;R\xc30\x14EeE\x05\x15\xc9\x0e\xb2\x02a\xebg=uP\xd0\x03\x03\xb5F\xd6\'a&\x05\xb1\xe5\xde\r\x1d\xc3\x12RPR\xa4g\x07^\x0c%\x0b\xc0\xce$\xb7:\xf7\xbe3\x0f\xe3\xe3\xc7\xef\xdf\xcf\xd7\xe1}}\xc4\xe5\xab\x8fMl76\xb8\xec\xca\x10\x93\xebw\xb9\xcc\xb1\xcb\xf6r\xf1\xbb\xbe\xcb\xb1\xb5\xccnC\xea\xaci|\xe2\xa0k\xcb@\t+u\x10\x16\xb8VVyY\xc9\xcai\x168\x94\xa7o\xd5\x1c\xca\x14\xd5\xdcq#\x05P\x01\x82Qi\xa0\xa6\x10\x1aM!\xd5N\xebT\x81\x9a\x96Yf7o\xae\xdd\xf71_?\xdc==\xbe\xdc?\x8f\xc5\xe7r\x18\x86[L\xf0\xf7\x82\x1c\x16h\x82\xb1 c1\x03"\x08\xe1\x19V\xe8\x1c\x82\xd9\xa9\xfbK\'\xc6L\x0e\xbe\x9a\xbc\x7fN\x9b\x96R\xa0\x1dfv\x80\x8d\xbdgE\xa4)\x83', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP 
connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fmetadata%2F570f4492-34ff-463d-bfc1-142c1828183a-m0.avro&user.name=root HTTP/1.1" 201 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 
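The `*-m0.avro` files written above are Iceberg manifests: Avro files whose header metadata embeds both the Avro record schema (`avro.schema`) and the Iceberg struct schema (`iceberg.schema`), plus `format-version` and `partition-spec-id`, and whose records are `manifest_entry` rows pointing at data files. A sketch of inspecting one with `fastavro` (one common Avro reader; `path` is a placeholder for a downloaded copy of the file):

```python
from fastavro import reader

with open(path, "rb") as f:
    manifest = reader(f)
    # Header key/value metadata, exactly as embedded in the byte dump:
    # format-version is '1' and partition-spec-id is '0' for this table.
    print(manifest.metadata["format-version"], manifest.metadata["partition-spec-id"])
    for entry in manifest:
        # Each manifest_entry carries: status (0=existing, 1=added, 2=deleted),
        # snapshot_id, and a data_file struct with location, format,
        # record_count and per-column stats (value_counts, bounds, ...).
        data_file = entry["data_file"]
        print(entry["status"], data_file["file_path"], data_file["record_count"])
```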
'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'2', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fmetadata%2Fversion-hint.text&user.name=root HTTP/1.1" 201 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 
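The tiny `version-hint.text` upload above (payload `b'2'`) is how file-system Iceberg tables without a metastore advertise their current state: the file holds a single integer N, and readers resolve it to `metadata/vN.metadata.json`. A sketch of that resolution, where `read_text` is a placeholder for whatever storage client is in use:

```python
def current_metadata_path(table_location: str, read_text) -> str:
    # version-hint.text holds just the version integer, e.g. b'2' above,
    # which points readers at metadata/v2.metadata.json.
    version = int(read_text(f"{table_location}/metadata/version-hint.text").strip())
    return f"{table_location}/metadata/v{version}.metadata.json"
```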
'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'Obj\x01\x0e\x0cschema\xa4\x02{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":false,"type":"long"},{"id":2,"name":"b","required":false,"type":"string"}]}\x16avro.schema\x96.{"type":"record","name":"manifest_entry","fields":[{"name":"status","type":"int","field-id":0},{"name":"snapshot_id","type":["null","long"],"default":null,"field-id":1},{"name":"data_file","type":{"type":"record","name":"r2","fields":[{"name":"file_path","type":"string","doc":"Location URI with FS scheme","field-id":100},{"name":"file_format","type":"string","doc":"File format name: avro, orc, or parquet","field-id":101},{"name":"partition","type":{"type":"record","name":"r102","fields":[]},"field-id":102},{"name":"record_count","type":"long","doc":"Number of records in the file","field-id":103},{"name":"file_size_in_bytes","type":"long","doc":"Total file size in bytes","field-id":104},{"name":"block_size_in_bytes","type":"long","field-id":105},{"name":"column_sizes","type":["null",{"type":"array","items":{"type":"record","name":"k117_v118","fields":[{"name":"key","type":"int","field-id":117},{"name":"value","type":"long","field-id":118}]},"logicalType":"map"}],"doc":"Map of column id to total size on disk","default":null,"field-id":108},{"name":"value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k119_v120","fields":[{"name":"key","type":"int","field-id":119},{"name":"value","type":"long","field-id":120}]},"logicalType":"map"}],"doc":"Map of column id to total count, including null and NaN","default":null,"field-id":109},{"name":"null_value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k121_v122","fields":[{"name":"key","type":"int","field-id":121},{"name":"value","type":"long","field-id":122}]},"logicalType":"map"}],"doc":"Map of column id to null value count","default":null,"field-id":110},{"name":"nan_value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k138_v139","fields":[{"name":"key","type":"int","field-id":138},{"name":"value","type":"long","field-id":139}]},"logicalType":"map"}],"doc":"Map of column id to number of NaN values in the 
column","default":null,"field-id":137},{"name":"lower_bounds","type":["null",{"type":"array","items":{"type":"record","name":"k126_v127","fields":[{"name":"key","type":"int","field-id":126},{"name":"value","type":"bytes","field-id":127}]},"logicalType":"map"}],"doc":"Map of column id to lower bound","default":null,"field-id":125},{"name":"upper_bounds","type":["null",{"type":"array","items":{"type":"record","name":"k129_v130","fields":[{"name":"key","type":"int","field-id":129},{"name":"value","type":"bytes","field-id":130}]},"logicalType":"map"}],"doc":"Map of column id to upper bound","default":null,"field-id":128},{"name":"key_metadata","type":["null","bytes"],"doc":"Encryption key metadata blob","default":null,"field-id":131},{"name":"split_offsets","type":["null",{"type":"array","items":"long","element-id":133}],"doc":"Splittable offsets","default":null,"field-id":132},{"name":"sort_order_id","type":["null","int"],"doc":"Sort order ID","default":null,"field-id":140}]},"field-id":2}]}\x14avro.codec\x0edeflate\x1cformat-version\x021"partition-spec-id\x020\x1ciceberg.schema\xea${"type":"struct","schema-id":0,"fields":[{"id":0,"name":"status","required":true,"type":"int"},{"id":1,"name":"snapshot_id","required":false,"type":"long"},{"id":2,"name":"data_file","required":true,"type":{"type":"struct","fields":[{"id":100,"name":"file_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":101,"name":"file_format","required":true,"type":"string","doc":"File format name: avro, orc, or parquet"},{"id":102,"name":"partition","required":true,"type":{"type":"struct","fields":[]}},{"id":103,"name":"record_count","required":true,"type":"long","doc":"Number of records in the file"},{"id":104,"name":"file_size_in_bytes","required":true,"type":"long","doc":"Total file size in bytes"},{"id":105,"name":"block_size_in_bytes","required":true,"type":"long"},{"id":108,"name":"column_sizes","required":false,"type":{"type":"map","key-id":117,"key":"int","value-id":118,"value":"long","value-required":true},"doc":"Map of column id to total size on disk"},{"id":109,"name":"value_counts","required":false,"type":{"type":"map","key-id":119,"key":"int","value-id":120,"value":"long","value-required":true},"doc":"Map of column id to total count, including null and NaN"},{"id":110,"name":"null_value_counts","required":false,"type":{"type":"map","key-id":121,"key":"int","value-id":122,"value":"long","value-required":true},"doc":"Map of column id to null value count"},{"id":137,"name":"nan_value_counts","required":false,"type":{"type":"map","key-id":138,"key":"int","value-id":139,"value":"long","value-required":true},"doc":"Map of column id to number of NaN values in the column"},{"id":125,"name":"lower_bounds","required":false,"type":{"type":"map","key-id":126,"key":"int","value-id":127,"value":"binary","value-required":true},"doc":"Map of column id to lower bound"},{"id":128,"name":"upper_bounds","required":false,"type":{"type":"map","key-id":129,"key":"int","value-id":130,"value":"binary","value-required":true},"doc":"Map of column id to upper bound"},{"id":131,"name":"key_metadata","required":false,"type":"binary","doc":"Encryption key metadata blob"},{"id":132,"name":"split_offsets","required":false,"type":{"type":"list","element-id":133,"element":"long","element-required":true},"doc":"Splittable offsets"},{"id":140,"name":"sort_order_id","required":false,"type":"int","doc":"Sort order 
ID"}]}}]}\x1cpartition-spec\x04[]\x00\x14\x89&/\xcb\x13\xaf\xce\xae1c\x15r>\x91\xf7\x02\xa4\x035\x8c\xb1N\xc30\x14E\x1d\xd7\x03\x13\xed\x8f\x98\xd8\x8e\x1d;[\x19\xd8\x01\xc1\xfc\xf4b;-R\x07\x9a8{66\x18\xd9\xf8\x00\x06\xbe\x82!?\xc1\xc8\xca\xcc\x82DR\xb5w:W\xf7\xe8R\xfa\xf9\xfb\xf7\xf3\xf2\xfa\xf4\xf5\x9d}\xd0\xfc\xc1\xc7:\xb6\x1b\x08\x980\x0f\xb1\xc1~\x97\xf2\x14\xbb\x04\xa7\xc5\xef\xfa.\xc5\x16$lC\xd3AU\xfbF9[\x82t\xa6\x00mC\x01NY\x03\xc6k\xa1\x05Z\x19\x94\xcb\x0fob\x0e\x97\x05\xd7\xbeq6\x08\xc5\x8d\xaej\xae\x95B^\x17(x\x94F\x96\xb6D\x1d\x05\xf2Y\x96\x17\x8f\xd8\xee\xfb\x98\xce\xaf/oo\xee\xaf\xee\xc6\xecy9\x0c\xc3\x9a2\xfa\xbe`o\x0b2\xc1\x98\xb11\x9b\x810B\xe8\x0c+r\x0c\xa3\xf2\xd0\xfd\xa9\xb3\xaa\x9a\x1cz6y\xff\x14\x89&/\xcb\x13\xaf\xce\xae1c\x15r>\x91\xf7', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fmetadata%2F359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro&user.name=root HTTP/1.1" 201 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT 
/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'{\n "format-version" : 1,\n "table-uuid" : "762d77fc-31c8-4b8f-a430-fe8ce8ac91f5",\n "location" : "/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28",\n "last-updated-ms" : 1743790503606,\n "last-column-id" : 2,\n "schema" : {\n "type" : "struct",\n "schema-id" : 0,\n "fields" : [ {\n "id" : 1,\n "name" : "a",\n "required" : false,\n "type" : "long"\n }, {\n "id" : 2,\n "name" : "b",\n "required" : false,\n "type" : "string"\n } ]\n },\n "current-schema-id" : 0,\n "schemas" : [ {\n "type" : "struct",\n "schema-id" : 0,\n "fields" : [ {\n "id" : 1,\n "name" : "a",\n "required" : false,\n "type" : "long"\n }, {\n "id" : 2,\n "name" : "b",\n "required" : false,\n "type" : "string"\n } ]\n } ],\n "partition-spec" : [ ],\n "default-spec-id" : 0,\n "partition-specs" : [ {\n "spec-id" : 0,\n "fields" : [ ]\n } ],\n "last-partition-id" : 999,\n "default-sort-order-id" : 0,\n "sort-orders" : [ {\n "order-id" : 0,\n "fields" : [ ]\n } ],\n "properties" : {\n "owner" : "root"\n },\n "current-snapshot-id" : 8276787480606260770,\n "refs" : {\n "main" : {\n "snapshot-id" : 8276787480606260770,\n "type" : "branch"\n }\n },\n "snapshots" : [ {\n "snapshot-id" : 8276787480606260770,\n "timestamp-ms" : 1743790503606,\n "summary" : {\n "operation" : "append",\n "spark.app.id" : "local-1743790492634",\n "added-data-files" : "1",\n "added-records" : "100",\n "added-files-size" : "967",\n "changed-partition-count" : "1",\n "total-records" : "100",\n "total-files-size" : "967",\n "total-data-files" : "1",\n "total-delete-files" : "0",\n "total-position-deletes" : "0",\n "total-equality-deletes" : "0"\n },\n "manifest-list" : "/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro",\n "schema-id" : 0\n } ],\n "statistics" : [ ],\n "snapshot-log" : [ {\n "timestamp-ms" : 1743790503606,\n "snapshot-id" : 8276787480606260770\n } ],\n "metadata-log" : [ ]\n}', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': 
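Unlike the binary uploads, the `v1.metadata.json` payload above is plain JSON and shows the whole table state: schema, partition specs, sort orders, and the snapshot list keyed by `current-snapshot-id`. A sketch of pulling the current snapshot's manifest list out of it (`metadata_bytes` stands in for the payload shown above):

```python
import json

meta = json.loads(metadata_bytes)
snap_id = meta["current-snapshot-id"]  # 8276787480606260770 in the dump above
snap = next(s for s in meta["snapshots"] if s["snapshot-id"] == snap_id)
# The snapshot points at its manifest list (the snap-*.avro file uploaded
# earlier); its summary records an append of 100 rows in one data file.
print(snap["manifest-list"], snap["summary"]["added-records"])
```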
{'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fmetadata%2Fv1.metadata.json&user.name=root HTTP/1.1" 201 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} Adding another dataframe. result files: ['/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json'] Command to send: c o50 sc e Answer received: !yro372 Command to send: c o372 defaultParallelism e Answer received: !yi1 Command to send: c o61 range i0 i100 i1 i1 e Answer received: !yro373 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo374 Command to send: c o374 add sa e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro374 e Answer received: !yro375 Command to send: c o373 toDF ro375 e Answer received: !yro376 Command to 
send: c o50 sc e Answer received: !yro377 Command to send: c o377 defaultParallelism e Answer received: !yi1 Command to send: c o61 range i1 i101 i1 i1 e Answer received: !yro378 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo379 Command to send: c o379 add sb e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro379 e Answer received: !yro380 Command to send: c o378 toDF ro380 e Answer received: !yro381 Command to send: c o381 apply sb e Answer received: !yro382 Command to send: r u SparkSession rj e Answer received: !ycorg.apache.spark.sql.SparkSession Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e Answer received: !ym Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e Answer received: !yro383 Command to send: c o383 isDefined e Answer received: !ybtrue Command to send: r u SparkSession rj e Answer received: !ycorg.apache.spark.sql.SparkSession Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e Answer received: !ym Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e Answer received: !yro384 Command to send: c o384 get e Answer received: !yro385 Command to send: r u SparkSession$ rj e Answer received: !ycorg.apache.spark.sql.SparkSession$ Command to send: r m org.apache.spark.sql.SparkSession$ MODULE$ e Answer received: !yro386 Command to send: i java.util.HashMap e Answer received: !yao387 Command to send: c o386 applyModifiableSettings ro385 ro387 e Answer received: !yv Command to send: c o61 parseDataType s"string" e Answer received: !yro388 Command to send: c o382 cast ro388 e Answer received: !yro389 Command to send: c o381 withColumn sb ro389 e Answer received: !yro390 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions row_number e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions row_number e Answer received: !yro391 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !yro392 Command to send: r u org rj e Answer received: !yp Command to send: r u org.apache rj e Answer received: !yp Command to send: r u org.apache.spark rj e Answer received: !yp Command to send: r u org.apache.spark.sql rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions.Window rj e Answer received: !ycorg.apache.spark.sql.expressions.Window Command to send: r m org.apache.spark.sql.expressions.Window orderBy e Answer received: !ym Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo393 Command to send: c o393 add ro392 e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro393 e Answer received: !yro394 Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro394 e Answer received: 
!yro395 Command to send: c o391 over ro395 e Answer received: !yro396 Command to send: c o376 withColumn srow_index ro396 e Answer received: !yro397 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions row_number e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions row_number e Answer received: !yro398 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !yro399 Command to send: r u org rj e Answer received: !yp Command to send: r u org.apache rj e Answer received: !yp Command to send: r u org.apache.spark rj e Answer received: !yp Command to send: r u org.apache.spark.sql rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions.Window rj e Answer received: !ycorg.apache.spark.sql.expressions.Window Command to send: r m org.apache.spark.sql.expressions.Window orderBy e Answer received: !ym Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo400 Command to send: c o400 add ro399 e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro400 e Answer received: !yro401 Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro401 e Answer received: !yro402 Command to send: c o398 over ro402 e Answer received: !yro403 Command to send: c o390 withColumn srow_index ro403 e Answer received: !yro404 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo405 Command to send: c o405 add srow_index e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro405 e Answer received: !yro406 Command to send: c o397 join ro404 ro406 sinner e Answer received: !yro407 Command to send: c o407 drop srow_index e Answer received: !yro408 Command to send: c o408 writeTo stest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28 e Answer received: !yro409 Command to send: c o409 append e Answer received: !yv GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 
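The `Command to send:` / `Answer received:` pairs above are Py4J gateway wire traffic between the Python test and the Spark JVM: `c <object> <method> <args> e` is a method call, and replies start with `!y` on success followed by a typed value (`ro<N>` object reference, `i` integer, `b` boolean, `v` void). Decoded, the sequence builds two single-column dataframes, aligns them on a generated row index, and appends the join to the Iceberg table. A rough PySpark equivalent, where `spark` and `table_name` are placeholders for the test's session and table:

```python
from pyspark.sql import functions as F
from pyspark.sql.window import Window

df_a = spark.range(0, 100).toDF("a")              # range i0 i100 ... toDF
df_b = spark.range(1, 101).toDF("b").withColumn(  # range i1 i101 ... toDF
    "b", F.col("b").cast("string")                # parseDataType s"string", cast
)
# row_number() over an ordering by monotonically_increasing_id(), used
# purely to give both frames a joinable row_index column.
idx = F.row_number().over(Window.orderBy(F.monotonically_increasing_id()))
joined = (
    df_a.withColumn("row_index", idx)
    .join(df_b.withColumn("row_index", idx), "row_index", "inner")
    .drop("row_index")
)
joined.writeTo(table_name).append()               # writeTo ... append -> !yv
```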
'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'PAR1\x15\x00\x15\xc0\x0c\x15\xf6\x02\x15\xbd\xf3\xd4\x95\x06\x1c\x15\xc8\x01\x15\x00\x15\x08\x15\x08\x00\x00\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00-\xc5\xd7"\x02\x00\x00\x00\xc0\x90Y![$+\xb3\xb23C\x8a\xec\x8c\xca\x8e\xf2\xff\xff\xe0\xc1\xdd\xcb\x05\x02\xffZ\xdc\xea6\x07\xdd\xee\x0ew\xba\xcb\xdd\xeeq\xc8aG\xdc\xeb>\xf7;\xea\x01\x0fz\xc8\xc3\x1e\xf1\xa8\xc7<\xee\x98\'<\xe9\xb8\xa7\x9c\xf0\xb4g<\xeb9\xcf;\xe9\x05/z\xc9\xcb^\xf1\xaaSN;\xe35\xaf{\xc3\x9b\xde\xf2\xb6w\x9c\xf5\xae\xf7\xbc\xef\x03\x1f\xfa\xc89\x1f\xfb\xc4\xa7\xce\xfb\xcc\x05\x17}\xee\x0b\x97|\xe9+_\xfb\xc6\xb7\xbe\xf3\xbd\xcb~\xf0\xa3\x9f\xfc\xec\x8a\xab\xae\xf9\xc5\xaf~\xf3\xbb?\xfc\xe9/\xd7\xfd\xed\x1f7\xdc\xf4\xaf\xff\x00\x02\xc2\xe7q 
\x03\x00\x00\x15\x00\x15\xa0\t\x15\xea\x02\x15\x8d\xbc\xbf\xb8\t\x1c\x15\xc8\x01\x15\x00\x15\x08\x15\x08\x00\x00\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00\x1d\xd2\xb1\x81B1\x0cD\xc1\x0f\xd7\x10X\xb2$\xf7\xdf\x18\xe7I^\xb4\x93\xed\xeby\x9e\xef\xeb?\xeb&n\xf2f\xdf\xd4M\xdf\xcc\xcdy\xdf\xf1G\xbf\xba44uki\xeb(\xbb\xd8\xc5.v\xb1\x8b]\xecb\x17\xbb\xd8\xc5\x06\x1bl\xb0\xc1\x06\x1bl\xb0\xc1\x06\x1bl\xb2\xc9&\x9bl\xb2\xc9&\x9bl\xb2\xc9nv\xb3\x9b\xdd\xecf7\xbb\xd9\xcdnv\xb3\xc5\x16[l\xb1\xc5\x16[l\xb1\xc5\x16\xdbl\xb3\xcd6\xdbl\xb3\xcd6\xdbl\xb3\xc3\x0e;\xec\xb0\xc3\x0e;\xec\xb0\xc3\x0e{\xd8\xc3\x1e\xf6\xb0\x87=\xeca\x0f{\xd8s\xfe|\xe3\xf3\x03\xd4\xdb\x86\xadP\x02\x00\x00\x19\x11\x02\x19\x18\x08\x00\x00\x00\x00\x00\x00\x00\x00\x19\x18\x08c\x00\x00\x00\x00\x00\x00\x00\x15\x02\x19\x16\x00\x00\x19\x11\x02\x19\x18\x011\x19\x18\x0299\x15\x02\x19\x16\x00\x00\x19\x1c\x16\x08\x15\xaa\x03\x16\x00\x00\x00\x19\x1c\x16\xb2\x03\x15\x9e\x03\x16\x00\x00\x00\x15\x02\x19\x00&\xb2\x03\x1c\x15\x0c\x19%\x00\x08\x19\x18\x01b\x15\x04\x16\xc8\x01\x16\xd4\t\x16\x9e\x03&\xb2\x03<6\x00(\x0299\x18\x011\x00\x19\x1c\x15\x00\x15\x00\x15\x02\x00\x00\x16\xc8\x07\x15\x18\x16\x8e\x07\x15$\x00\x16\xc8\x16\x16\xc8\x01&\x08\x16\xc8\x06\x14\x00\x00\x19\x1c\x18\x0eiceberg.schema\x18\x90\x01{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":true,"type":"long"},{"id":2,"name":"b","required":true,"type":"string"}]}\x00\x18Jparquet-mr version 1.12.3 (build f8dced182c4c1fbdec6ccb3185537b5a01e6ed6b)\x19,\x1c\x00\x00\x1c\x00\x00\x00\xcf\x01\x00\x00PAR1', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fdata%2F00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet&user.name=root HTTP/1.1" 201 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 
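The `data` payloads for the `data/*.parquet` uploads are complete Parquet files: `PAR1` magic at both ends, gzip-compressed pages (the `\x1f\x8b` runs), and a footer whose key/value metadata embeds the same `iceberg.schema` JSON plus the writer string `parquet-mr version 1.12.3`. A sketch confirming that with `pyarrow` (`path` is a placeholder for a downloaded copy of one of these files):

```python
import pyarrow.parquet as pq

md = pq.read_metadata(path)
# 100 rows per file, matching the snapshot summaries logged above.
print(md.num_rows, md.created_by)        # ... "parquet-mr version 1.12.3 ..."
# Footer key/value metadata keys are bytes in pyarrow.
print(md.metadata[b"iceberg.schema"])    # same schema JSON as in the dump
```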
172.16.2.2:50070 http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'PAR1\x15\x00\x15\xc0\x0c\x15\xf6\x02\x15\xbd\xf3\xd4\x95\x06\x1c\x15\xc8\x01\x15\x00\x15\x08\x15\x08\x00\x00\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00-\xc5\xd7"\x02\x00\x00\x00\xc0\x90Y![$+\xb3\xb23C\x8a\xec\x8c\xca\x8e\xf2\xff\xff\xe0\xc1\xdd\xcb\x05\x02\xffZ\xdc\xea6\x07\xdd\xee\x0ew\xba\xcb\xdd\xeeq\xc8aG\xdc\xeb>\xf7;\xea\x01\x0fz\xc8\xc3\x1e\xf1\xa8\xc7<\xee\x98\'<\xe9\xb8\xa7\x9c\xf0\xb4g<\xeb9\xcf;\xe9\x05/z\xc9\xcb^\xf1\xaaSN;\xe35\xaf{\xc3\x9b\xde\xf2\xb6w\x9c\xf5\xae\xf7\xbc\xef\x03\x1f\xfa\xc89\x1f\xfb\xc4\xa7\xce\xfb\xcc\x05\x17}\xee\x0b\x97|\xe9+_\xfb\xc6\xb7\xbe\xf3\xbd\xcb~\xf0\xa3\x9f\xfc\xec\x8a\xab\xae\xf9\xc5\xaf~\xf3\xbb?\xfc\xe9/\xd7\xfd\xed\x1f7\xdc\xf4\xaf\xff\x00\x02\xc2\xe7q 
\x03\x00\x00\x15\x00\x15\xa0\t\x15\xea\x02\x15\x8d\xbc\xbf\xb8\t\x1c\x15\xc8\x01\x15\x00\x15\x08\x15\x08\x00\x00\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00\x1d\xd2\xb1\x81B1\x0cD\xc1\x0f\xd7\x10X\xb2$\xf7\xdf\x18\xe7I^\xb4\x93\xed\xeby\x9e\xef\xeb?\xeb&n\xf2f\xdf\xd4M\xdf\xcc\xcdy\xdf\xf1G\xbf\xba44uki\xeb(\xbb\xd8\xc5.v\xb1\x8b]\xecb\x17\xbb\xd8\xc5\x06\x1bl\xb0\xc1\x06\x1bl\xb0\xc1\x06\x1bl\xb2\xc9&\x9bl\xb2\xc9&\x9bl\xb2\xc9nv\xb3\x9b\xdd\xecf7\xbb\xd9\xcdnv\xb3\xc5\x16[l\xb1\xc5\x16[l\xb1\xc5\x16\xdbl\xb3\xcd6\xdbl\xb3\xcd6\xdbl\xb3\xc3\x0e;\xec\xb0\xc3\x0e;\xec\xb0\xc3\x0e{\xd8\xc3\x1e\xf6\xb0\x87=\xeca\x0f{\xd8s\xfe|\xe3\xf3\x03\xd4\xdb\x86\xadP\x02\x00\x00\x19\x11\x02\x19\x18\x08\x00\x00\x00\x00\x00\x00\x00\x00\x19\x18\x08c\x00\x00\x00\x00\x00\x00\x00\x15\x02\x19\x16\x00\x00\x19\x11\x02\x19\x18\x011\x19\x18\x0299\x15\x02\x19\x16\x00\x00\x19\x1c\x16\x08\x15\xaa\x03\x16\x00\x00\x00\x19\x1c\x16\xb2\x03\x15\x9e\x03\x16\x00\x00\x00\x15\x02\x19\x00&\xb2\x03\x1c\x15\x0c\x19%\x00\x08\x19\x18\x01b\x15\x04\x16\xc8\x01\x16\xd4\t\x16\x9e\x03&\xb2\x03<6\x00(\x0299\x18\x011\x00\x19\x1c\x15\x00\x15\x00\x15\x02\x00\x00\x16\xc8\x07\x15\x18\x16\x8e\x07\x15$\x00\x16\xc8\x16\x16\xc8\x01&\x08\x16\xc8\x06\x14\x00\x00\x19\x1c\x18\x0eiceberg.schema\x18\x90\x01{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":true,"type":"long"},{"id":2,"name":"b","required":true,"type":"string"}]}\x00\x18Jparquet-mr version 1.12.3 (build f8dced182c4c1fbdec6ccb3185537b5a01e6ed6b)\x19,\x1c\x00\x00\x1c\x00\x00\x00\xcf\x01\x00\x00PAR1', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fdata%2F00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet&user.name=root HTTP/1.1" 201 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 
172.16.2.2:50070 http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-17-f616090c-638c-4ffa-9676-6d656c258c03-00001.parquet user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-17-f616090c-638c-4ffa-9676-6d656c258c03-00001.parquet?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-17-f616090c-638c-4ffa-9676-6d656c258c03-00001.parquet?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-17-f616090c-638c-4ffa-9676-6d656c258c03-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-17-f616090c-638c-4ffa-9676-6d656c258c03-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-17-f616090c-638c-4ffa-9676-6d656c258c03-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'PAR1\x15\x00\x15\xc0\x0c\x15\xf6\x02\x15\xbd\xf3\xd4\x95\x06\x1c\x15\xc8\x01\x15\x00\x15\x08\x15\x08\x00\x00\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00-\xc5\xd7"\x02\x00\x00\x00\xc0\x90Y![$+\xb3\xb23C\x8a\xec\x8c\xca\x8e\xf2\xff\xff\xe0\xc1\xdd\xcb\x05\x02\xffZ\xdc\xea6\x07\xdd\xee\x0ew\xba\xcb\xdd\xeeq\xc8aG\xdc\xeb>\xf7;\xea\x01\x0fz\xc8\xc3\x1e\xf1\xa8\xc7<\xee\x98\'<\xe9\xb8\xa7\x9c\xf0\xb4g<\xeb9\xcf;\xe9\x05/z\xc9\xcb^\xf1\xaaSN;\xe35\xaf{\xc3\x9b\xde\xf2\xb6w\x9c\xf5\xae\xf7\xbc\xef\x03\x1f\xfa\xc89\x1f\xfb\xc4\xa7\xce\xfb\xcc\x05\x17}\xee\x0b\x97|\xe9+_\xfb\xc6\xb7\xbe\xf3\xbd\xcb~\xf0\xa3\x9f\xfc\xec\x8a\xab\xae\xf9\xc5\xaf~\xf3\xbb?\xfc\xe9/\xd7\xfd\xed\x1f7\xdc\xf4\xaf\xff\x00\x02\xc2\xe7q 
\x03\x00\x00\x15\x00\x15\xa0\t\x15\xea\x02\x15\x8d\xbc\xbf\xb8\t\x1c\x15\xc8\x01\x15\x00\x15\x08\x15\x08\x00\x00\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00\x1d\xd2\xb1\x81B1\x0cD\xc1\x0f\xd7\x10X\xb2$\xf7\xdf\x18\xe7I^\xb4\x93\xed\xeby\x9e\xef\xeb?\xeb&n\xf2f\xdf\xd4M\xdf\xcc\xcdy\xdf\xf1G\xbf\xba44uki\xeb(\xbb\xd8\xc5.v\xb1\x8b]\xecb\x17\xbb\xd8\xc5\x06\x1bl\xb0\xc1\x06\x1bl\xb0\xc1\x06\x1bl\xb2\xc9&\x9bl\xb2\xc9&\x9bl\xb2\xc9nv\xb3\x9b\xdd\xecf7\xbb\xd9\xcdnv\xb3\xc5\x16[l\xb1\xc5\x16[l\xb1\xc5\x16\xdbl\xb3\xcd6\xdbl\xb3\xcd6\xdbl\xb3\xc3\x0e;\xec\xb0\xc3\x0e;\xec\xb0\xc3\x0e{\xd8\xc3\x1e\xf6\xb0\x87=\xeca\x0f{\xd8s\xfe|\xe3\xf3\x03\xd4\xdb\x86\xadP\x02\x00\x00\x19\x11\x02\x19\x18\x08\x00\x00\x00\x00\x00\x00\x00\x00\x19\x18\x08c\x00\x00\x00\x00\x00\x00\x00\x15\x02\x19\x16\x00\x00\x19\x11\x02\x19\x18\x011\x19\x18\x0299\x15\x02\x19\x16\x00\x00\x19\x1c\x16\x08\x15\xaa\x03\x16\x00\x00\x00\x19\x1c\x16\xb2\x03\x15\x9e\x03\x16\x00\x00\x00\x15\x02\x19\x00&\xb2\x03\x1c\x15\x0c\x19%\x00\x08\x19\x18\x01b\x15\x04\x16\xc8\x01\x16\xd4\t\x16\x9e\x03&\xb2\x03<6\x00(\x0299\x18\x011\x00\x19\x1c\x15\x00\x15\x00\x15\x02\x00\x00\x16\xc8\x07\x15\x18\x16\x8e\x07\x15$\x00\x16\xc8\x16\x16\xc8\x01&\x08\x16\xc8\x06\x14\x00\x00\x19\x1c\x18\x0eiceberg.schema\x18\x90\x01{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":true,"type":"long"},{"id":2,"name":"b","required":true,"type":"string"}]}\x00\x18Jparquet-mr version 1.12.3 (build f8dced182c4c1fbdec6ccb3185537b5a01e6ed6b)\x19,\x1c\x00\x00\x1c\x00\x00\x00\xcf\x01\x00\x00PAR1', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-17-f616090c-638c-4ffa-9676-6d656c258c03-00001.parquet', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-17-f616090c-638c-4ffa-9676-6d656c258c03-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fdata%2F00000-17-f616090c-638c-4ffa-9676-6d656c258c03-00001.parquet&user.name=root HTTP/1.1" 201 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-17-f616090c-638c-4ffa-9676-6d656c258c03-00001.parquet', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-17-f616090c-638c-4ffa-9676-6d656c258c03-00001.parquet', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 
172.16.2.2:50070 http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'Obj\x01\x0c\x16avro.schema\xfa\x1b{"type":"record","name":"manifest_file","fields":[{"name":"manifest_path","type":"string","doc":"Location URI with FS scheme","field-id":500},{"name":"manifest_length","type":"long","doc":"Total file size in bytes","field-id":501},{"name":"partition_spec_id","type":"int","doc":"Spec ID used to write","field-id":502},{"name":"added_snapshot_id","type":["null","long"],"doc":"Snapshot ID that added the manifest","default":null,"field-id":503},{"name":"added_data_files_count","type":["null","int"],"doc":"Added entry count","default":null,"field-id":504},{"name":"existing_data_files_count","type":["null","int"],"doc":"Existing entry count","default":null,"field-id":505},{"name":"deleted_data_files_count","type":["null","int"],"doc":"Deleted entry count","default":null,"field-id":506},{"name":"partitions","type":["null",{"type":"array","items":{"type":"record","name":"r508","fields":[{"name":"contains_null","type":"boolean","doc":"True if any file has a 
null partition value","field-id":509},{"name":"contains_nan","type":["null","boolean"],"doc":"True if any file has a nan partition value","default":null,"field-id":518},{"name":"lower_bound","type":["null","bytes"],"doc":"Partition lower bound for all files","default":null,"field-id":510},{"name":"upper_bound","type":["null","bytes"],"doc":"Partition upper bound for all files","default":null,"field-id":511}]},"element-id":508}],"doc":"Summary for each partition","default":null,"field-id":507},{"name":"added_rows_count","type":["null","long"],"doc":"Added rows count","default":null,"field-id":512},{"name":"existing_rows_count","type":["null","long"],"doc":"Existing rows count","default":null,"field-id":513},{"name":"deleted_rows_count","type":["null","long"],"doc":"Deleted rows count","default":null,"field-id":514}]}\x14avro.codec\x0edeflate\x16snapshot-id&8276787480606260770\x1cformat-version\x021\x1ciceberg.schema\xb4\x1a{"type":"struct","schema-id":0,"fields":[{"id":500,"name":"manifest_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":501,"name":"manifest_length","required":true,"type":"long","doc":"Total file size in bytes"},{"id":502,"name":"partition_spec_id","required":true,"type":"int","doc":"Spec ID used to write"},{"id":503,"name":"added_snapshot_id","required":false,"type":"long","doc":"Snapshot ID that added the manifest"},{"id":504,"name":"added_data_files_count","required":false,"type":"int","doc":"Added entry count"},{"id":505,"name":"existing_data_files_count","required":false,"type":"int","doc":"Existing entry count"},{"id":506,"name":"deleted_data_files_count","required":false,"type":"int","doc":"Deleted entry count"},{"id":507,"name":"partitions","required":false,"type":{"type":"list","element-id":508,"element":{"type":"struct","fields":[{"id":509,"name":"contains_null","required":true,"type":"boolean","doc":"True if any file has a null partition value"},{"id":518,"name":"contains_nan","required":false,"type":"boolean","doc":"True if any file has a nan partition value"},{"id":510,"name":"lower_bound","required":false,"type":"binary","doc":"Partition lower bound for all files"},{"id":511,"name":"upper_bound","required":false,"type":"binary","doc":"Partition upper bound for all files"}]},"element-required":true},"doc":"Summary for each partition"},{"id":512,"name":"added_rows_count","required":false,"type":"long","doc":"Added rows count"},{"id":513,"name":"existing_rows_count","required":false,"type":"long","doc":"Existing rows count"},{"id":514,"name":"deleted_rows_count","required":false,"type":"long","doc":"Deleted rows count"}]}$parent-snapshot-id\x08null\x00\xa0C\x02\xe4a\xaf\xab\x01I\x89\x17G\xf3\xb7\x9br\x02\xac\x025\x8c\xbb\r\xc20\x14\x00\x13\xefc\xec\xc4\xcf\xb1\xbd\n\xcd\x93?\xcf\x80\x94\x08)qX\x81\x92\x12\x06\xa2`\tJZj\x1a\x10"\x12\xba\xe2\x8a\x93\xee\xc2\xc4.R\xa0q\x83\xc9\x17/\x12e?\xf7E\x14\x9a\n\xfeK\xec\xe7\xa9\xd0\x88\rnS\x9e\xd0\x85\x98[k:l\xacV\x08&)\xb4\xad\xd1\xa8#H\x90\xde4\xa9\xb5b\xa0\xe2\x97\xa3\xd2\x8e\x9c\x02\xcbM\xce\x86\x83\x03\xc3-X\xe2\x94d\xea\x88BP\x1d\xf0A\xae\xfca\xdc\x7f\xd6\x15\xbb\xbe\xde\xcf\xd3\xf9x\x7f\xd4\x8c\xb1j\xe1V\xff\xf4\x05\xa0C\x02\xe4a\xaf\xab\x01I\x89\x17G\xf3\xb7\x9br', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': 
None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fmetadata%2Fsnap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro&user.name=root HTTP/1.1" 201 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 
'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'{\n "format-version" : 1,\n "table-uuid" : "762d77fc-31c8-4b8f-a430-fe8ce8ac91f5",\n "location" : "/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28",\n "last-updated-ms" : 1743790504083,\n "last-column-id" : 2,\n "schema" : {\n "type" : "struct",\n "schema-id" : 0,\n "fields" : [ {\n "id" : 1,\n "name" : "a",\n "required" : false,\n "type" : "long"\n }, {\n "id" : 2,\n "name" : "b",\n "required" : false,\n "type" : "string"\n } ]\n },\n "current-schema-id" : 0,\n "schemas" : [ {\n "type" : "struct",\n "schema-id" : 0,\n "fields" : [ {\n "id" : 1,\n "name" : "a",\n "required" : false,\n "type" : "long"\n }, {\n "id" : 2,\n "name" : "b",\n "required" : false,\n "type" : "string"\n } ]\n } ],\n "partition-spec" : [ ],\n "default-spec-id" : 0,\n "partition-specs" : [ {\n "spec-id" : 0,\n "fields" : [ ]\n } ],\n "last-partition-id" : 999,\n "default-sort-order-id" : 0,\n "sort-orders" : [ {\n "order-id" : 0,\n "fields" : [ ]\n } ],\n "properties" : {\n "owner" : "root"\n },\n "current-snapshot-id" : 1118366645057585943,\n "refs" : {\n "main" : {\n "snapshot-id" : 1118366645057585943,\n "type" : "branch"\n }\n },\n "snapshots" : [ {\n "snapshot-id" : 8276787480606260770,\n "timestamp-ms" : 1743790503606,\n "summary" : {\n "operation" : "append",\n "spark.app.id" : "local-1743790492634",\n "added-data-files" : "1",\n "added-records" : "100",\n "added-files-size" : "967",\n "changed-partition-count" : "1",\n "total-records" : "100",\n "total-files-size" : "967",\n "total-data-files" : "1",\n "total-delete-files" : "0",\n "total-position-deletes" : "0",\n "total-equality-deletes" : "0"\n },\n "manifest-list" : "/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro",\n "schema-id" : 0\n }, {\n "snapshot-id" : 1118366645057585943,\n "parent-snapshot-id" : 8276787480606260770,\n "timestamp-ms" : 1743790504083,\n "summary" : {\n "operation" : "append",\n "spark.app.id" : "local-1743790492634",\n "added-data-files" : "1",\n "added-records" : "100",\n "added-files-size" : "967",\n "changed-partition-count" : "1",\n "total-records" : "200",\n "total-files-size" : "1934",\n "total-data-files" : "2",\n "total-delete-files" : "0",\n "total-position-deletes" : "0",\n "total-equality-deletes" : "0"\n },\n "manifest-list" : "/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro",\n "schema-id" : 0\n } ],\n "statistics" : [ ],\n "snapshot-log" : [ {\n "timestamp-ms" : 1743790503606,\n "snapshot-id" : 8276787480606260770\n }, {\n "timestamp-ms" : 1743790504083,\n "snapshot-id" : 1118366645057585943\n } ],\n "metadata-log" : [ {\n "timestamp-ms" : 1743790503606,\n "metadata-file" : "/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json"\n } ]\n}', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': 
'/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fmetadata%2Fv2.metadata.json&user.name=root HTTP/1.1" 201 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 
2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'Obj\x01\x0c\x16avro.schema\xfa\x1b{"type":"record","name":"manifest_file","fields":[{"name":"manifest_path","type":"string","doc":"Location URI with FS scheme","field-id":500},{"name":"manifest_length","type":"long","doc":"Total file size in bytes","field-id":501},{"name":"partition_spec_id","type":"int","doc":"Spec ID used to write","field-id":502},{"name":"added_snapshot_id","type":["null","long"],"doc":"Snapshot ID that added the manifest","default":null,"field-id":503},{"name":"added_data_files_count","type":["null","int"],"doc":"Added entry count","default":null,"field-id":504},{"name":"existing_data_files_count","type":["null","int"],"doc":"Existing entry count","default":null,"field-id":505},{"name":"deleted_data_files_count","type":["null","int"],"doc":"Deleted entry count","default":null,"field-id":506},{"name":"partitions","type":["null",{"type":"array","items":{"type":"record","name":"r508","fields":[{"name":"contains_null","type":"boolean","doc":"True if any file has a null partition value","field-id":509},{"name":"contains_nan","type":["null","boolean"],"doc":"True if any file has a nan partition value","default":null,"field-id":518},{"name":"lower_bound","type":["null","bytes"],"doc":"Partition lower bound for all files","default":null,"field-id":510},{"name":"upper_bound","type":["null","bytes"],"doc":"Partition upper bound for all files","default":null,"field-id":511}]},"element-id":508}],"doc":"Summary for each partition","default":null,"field-id":507},{"name":"added_rows_count","type":["null","long"],"doc":"Added rows count","default":null,"field-id":512},{"name":"existing_rows_count","type":["null","long"],"doc":"Existing rows count","default":null,"field-id":513},{"name":"deleted_rows_count","type":["null","long"],"doc":"Deleted rows count","default":null,"field-id":514}]}\x14avro.codec\x0edeflate\x16snapshot-id&1118366645057585943\x1cformat-version\x021\x1ciceberg.schema\xb4\x1a{"type":"struct","schema-id":0,"fields":[{"id":500,"name":"manifest_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":501,"name":"manifest_length","required":true,"type":"long","doc":"Total file size in bytes"},{"id":502,"name":"partition_spec_id","required":true,"type":"int","doc":"Spec ID used to write"},{"id":503,"name":"added_snapshot_id","required":false,"type":"long","doc":"Snapshot ID that added the manifest"},{"id":504,"name":"added_data_files_count","required":false,"type":"int","doc":"Added entry count"},{"id":505,"name":"existing_data_files_count","required":false,"type":"int","doc":"Existing entry count"},{"id":506,"name":"deleted_data_files_count","required":false,"type":"int","doc":"Deleted entry 
count"},{"id":507,"name":"partitions","required":false,"type":{"type":"list","element-id":508,"element":{"type":"struct","fields":[{"id":509,"name":"contains_null","required":true,"type":"boolean","doc":"True if any file has a null partition value"},{"id":518,"name":"contains_nan","required":false,"type":"boolean","doc":"True if any file has a nan partition value"},{"id":510,"name":"lower_bound","required":false,"type":"binary","doc":"Partition lower bound for all files"},{"id":511,"name":"upper_bound","required":false,"type":"binary","doc":"Partition upper bound for all files"}]},"element-required":true},"doc":"Summary for each partition"},{"id":512,"name":"added_rows_count","required":false,"type":"long","doc":"Added rows count"},{"id":513,"name":"existing_rows_count","required":false,"type":"long","doc":"Existing rows count"},{"id":514,"name":"deleted_rows_count","required":false,"type":"long","doc":"Deleted rows count"}]}$parent-snapshot-id&8276787480606260770\x00Vm\x1eI\xe9~m\xf5#\xf2\xb6\x18h\x03\x8f\x9c\x04\xa0\x03\xb5\xce=JCA\x14\x05\xe0d6\xe2\n&o~\xee\xbc\xb9\xb3\x954\xc3\xfc\xdck\x84\x04\xe1\xbd\x89;\x904\x82\xa5\xa9,\\@\xfa\xf4\x16n\xc2\xd2N\xac,DP\xd4@\x1a\xfbp\x8aS\x1c8|[\xd1]\x14\xca4\x9c\xc7\x9aZ\xea*qZ/[\xd7hl\xf1\xb8\x94\xe5zl4D\x1d\x17\x95\xc7\x18ra\x83\xbe\x8f\x1a\x9d\x8d\xe0\xab\x8dh\xbc\x8b\xae\x80\x02\x95\xbc\xae\x06\xbb\x15\xb5txt^1@0\xd2\x02\xb3\x84\xdeV\x99\xb9h\xa9\xc1\x14\x8d\x065\xda$Wj\x96\xae\x86\xcb\xcf\xf9D\xecn^\xdf\xf7\x0f\xf7\xd7gB\x88\xc9!O\xd3\xbf\xda\x9e\\j]\xa0`\x01\xa5g\xf6\x12\x02x\x89\x80$\xa9\xaa\xda\x13\xe5l{8J\xbf\x7f\xa5\x8f\x1f_o\xb7w\x9b\xe7\x97\xe9?\xea\x0fVm\x1eI\xe9~m\xf5#\xf2\xb6\x18h\x03\x8f\x9c', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fmetadata%2Fsnap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro&user.name=root HTTP/1.1" 201 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro', 'Content-Length': '0', 'Server': 
'Jetty(6.1.26)'} GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8494498371458529486-1-f0279b92-44d9-4660-aef4-f4083db67bb3.avro user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8494498371458529486-1-f0279b92-44d9-4660-aef4-f4083db67bb3.avro?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8494498371458529486-1-f0279b92-44d9-4660-aef4-f4083db67bb3.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8494498371458529486-1-f0279b92-44d9-4660-aef4-f4083db67bb3.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8494498371458529486-1-f0279b92-44d9-4660-aef4-f4083db67bb3.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8494498371458529486-1-f0279b92-44d9-4660-aef4-f4083db67bb3.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'Obj\x01\x0c\x16avro.schema\xfa\x1b{"type":"record","name":"manifest_file","fields":[{"name":"manifest_path","type":"string","doc":"Location URI with FS scheme","field-id":500},{"name":"manifest_length","type":"long","doc":"Total file size in bytes","field-id":501},{"name":"partition_spec_id","type":"int","doc":"Spec ID used to write","field-id":502},{"name":"added_snapshot_id","type":["null","long"],"doc":"Snapshot ID that added the manifest","default":null,"field-id":503},{"name":"added_data_files_count","type":["null","int"],"doc":"Added entry count","default":null,"field-id":504},{"name":"existing_data_files_count","type":["null","int"],"doc":"Existing entry count","default":null,"field-id":505},{"name":"deleted_data_files_count","type":["null","int"],"doc":"Deleted entry 
count","default":null,"field-id":506},{"name":"partitions","type":["null",{"type":"array","items":{"type":"record","name":"r508","fields":[{"name":"contains_null","type":"boolean","doc":"True if any file has a null partition value","field-id":509},{"name":"contains_nan","type":["null","boolean"],"doc":"True if any file has a nan partition value","default":null,"field-id":518},{"name":"lower_bound","type":["null","bytes"],"doc":"Partition lower bound for all files","default":null,"field-id":510},{"name":"upper_bound","type":["null","bytes"],"doc":"Partition upper bound for all files","default":null,"field-id":511}]},"element-id":508}],"doc":"Summary for each partition","default":null,"field-id":507},{"name":"added_rows_count","type":["null","long"],"doc":"Added rows count","default":null,"field-id":512},{"name":"existing_rows_count","type":["null","long"],"doc":"Existing rows count","default":null,"field-id":513},{"name":"deleted_rows_count","type":["null","long"],"doc":"Deleted rows count","default":null,"field-id":514}]}\x14avro.codec\x0edeflate\x16snapshot-id&8494498371458529486\x1cformat-version\x021\x1ciceberg.schema\xb4\x1a{"type":"struct","schema-id":0,"fields":[{"id":500,"name":"manifest_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":501,"name":"manifest_length","required":true,"type":"long","doc":"Total file size in bytes"},{"id":502,"name":"partition_spec_id","required":true,"type":"int","doc":"Spec ID used to write"},{"id":503,"name":"added_snapshot_id","required":false,"type":"long","doc":"Snapshot ID that added the manifest"},{"id":504,"name":"added_data_files_count","required":false,"type":"int","doc":"Added entry count"},{"id":505,"name":"existing_data_files_count","required":false,"type":"int","doc":"Existing entry count"},{"id":506,"name":"deleted_data_files_count","required":false,"type":"int","doc":"Deleted entry count"},{"id":507,"name":"partitions","required":false,"type":{"type":"list","element-id":508,"element":{"type":"struct","fields":[{"id":509,"name":"contains_null","required":true,"type":"boolean","doc":"True if any file has a null partition value"},{"id":518,"name":"contains_nan","required":false,"type":"boolean","doc":"True if any file has a nan partition value"},{"id":510,"name":"lower_bound","required":false,"type":"binary","doc":"Partition lower bound for all files"},{"id":511,"name":"upper_bound","required":false,"type":"binary","doc":"Partition upper bound for all files"}]},"element-required":true},"doc":"Summary for each partition"},{"id":512,"name":"added_rows_count","required":false,"type":"long","doc":"Added rows count"},{"id":513,"name":"existing_rows_count","required":false,"type":"long","doc":"Existing rows count"},{"id":514,"name":"deleted_rows_count","required":false,"type":"long","doc":"Deleted rows count"}]}$parent-snapshot-id&1118366645057585943\x00 5\xf3\xf21\xbc"\xf9mP\xc7OtuDd\x06\xf0\x03\xbd\xd0;J\x04A\x10\x80\xe1\xdd\xbe\x88\'h\xa7\x1f5\xfd\xb8\x8aI\xd3\x8f*\x15v\x11fz\xbd\x81\x98\x08\x86n 
\x06\x1e\xc0\xdcT\x0c\x0c\xcd\xc5D1Q12\x10a\x17\xc7\x85\x051W*\xa8\xa0\xa0\xf8\xf8\xe7\xac\xd9\xcd\x98\xb0\xdb\x0e%\xd6\xd8\x14\xa48\x9b\xd4\xa6b_\xc3\xfa\x92\'\xb3\xbeb\x17d\xd8)\xd4\x07\x9f2)gM\x90\xae\xd5\x01l\xd1\xc1)\xdb\x866\x83\x00\x11\xad,\xca5S\xacq\xf5\x91\x84\xb2>y\xc5\x01\x8a\xe7`\x8c\xe0\x11\t8\x81p\xba$cS\xd2|*6\xe3~\xb7\xb7\xdc\x1a\xb1\xd3\xc7\xbb\xe5\xf3\xed\xd5\xc3\xcb\x9816Z\xcd\xcd\xf8{\xcd\xff\x9c\xdaZA\x00\x03U\x03\xd1@\xd5\x85\'\xca\x92KPY:\xe5\xa4\xd3qM\xfd\x1c\xa8\x17G\xaf\xef\x97\xe7g\x07\x1b\xff.\xd5\xadG\xaf\xc1qKd9x\xb0\xdc\x81C\x8eE\x14\x838$5\xf03\xea\xf5\xc7\xe2\xed\xf8\xe4\xf0\xfe\xe9w\xd4/ 5\xf3\xf21\xbc"\xf9mP\xc7OtuDd', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8494498371458529486-1-f0279b92-44d9-4660-aef4-f4083db67bb3.avro', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8494498371458529486-1-f0279b92-44d9-4660-aef4-f4083db67bb3.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fmetadata%2Fsnap-8494498371458529486-1-f0279b92-44d9-4660-aef4-f4083db67bb3.avro&user.name=root HTTP/1.1" 201 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8494498371458529486-1-f0279b92-44d9-4660-aef4-f4083db67bb3.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8494498371458529486-1-f0279b92-44d9-4660-aef4-f4083db67bb3.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/f0279b92-44d9-4660-aef4-f4083db67bb3-m0.avro user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/f0279b92-44d9-4660-aef4-f4083db67bb3-m0.avro?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 
"PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/f0279b92-44d9-4660-aef4-f4083db67bb3-m0.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/f0279b92-44d9-4660-aef4-f4083db67bb3-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/f0279b92-44d9-4660-aef4-f4083db67bb3-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/f0279b92-44d9-4660-aef4-f4083db67bb3-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'Obj\x01\x0e\x0cschema\xa4\x02{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":false,"type":"long"},{"id":2,"name":"b","required":false,"type":"string"}]}\x16avro.schema\x96.{"type":"record","name":"manifest_entry","fields":[{"name":"status","type":"int","field-id":0},{"name":"snapshot_id","type":["null","long"],"default":null,"field-id":1},{"name":"data_file","type":{"type":"record","name":"r2","fields":[{"name":"file_path","type":"string","doc":"Location URI with FS scheme","field-id":100},{"name":"file_format","type":"string","doc":"File format name: avro, orc, or parquet","field-id":101},{"name":"partition","type":{"type":"record","name":"r102","fields":[]},"field-id":102},{"name":"record_count","type":"long","doc":"Number of records in the file","field-id":103},{"name":"file_size_in_bytes","type":"long","doc":"Total file size in bytes","field-id":104},{"name":"block_size_in_bytes","type":"long","field-id":105},{"name":"column_sizes","type":["null",{"type":"array","items":{"type":"record","name":"k117_v118","fields":[{"name":"key","type":"int","field-id":117},{"name":"value","type":"long","field-id":118}]},"logicalType":"map"}],"doc":"Map of column id to total size on disk","default":null,"field-id":108},{"name":"value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k119_v120","fields":[{"name":"key","type":"int","field-id":119},{"name":"value","type":"long","field-id":120}]},"logicalType":"map"}],"doc":"Map of column id to total count, including null and NaN","default":null,"field-id":109},{"name":"null_value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k121_v122","fields":[{"name":"key","type":"int","field-id":121},{"name":"value","type":"long","field-id":122}]},"logicalType":"map"}],"doc":"Map of column id to null value 
count","default":null,"field-id":110},{"name":"nan_value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k138_v139","fields":[{"name":"key","type":"int","field-id":138},{"name":"value","type":"long","field-id":139}]},"logicalType":"map"}],"doc":"Map of column id to number of NaN values in the column","default":null,"field-id":137},{"name":"lower_bounds","type":["null",{"type":"array","items":{"type":"record","name":"k126_v127","fields":[{"name":"key","type":"int","field-id":126},{"name":"value","type":"bytes","field-id":127}]},"logicalType":"map"}],"doc":"Map of column id to lower bound","default":null,"field-id":125},{"name":"upper_bounds","type":["null",{"type":"array","items":{"type":"record","name":"k129_v130","fields":[{"name":"key","type":"int","field-id":129},{"name":"value","type":"bytes","field-id":130}]},"logicalType":"map"}],"doc":"Map of column id to upper bound","default":null,"field-id":128},{"name":"key_metadata","type":["null","bytes"],"doc":"Encryption key metadata blob","default":null,"field-id":131},{"name":"split_offsets","type":["null",{"type":"array","items":"long","element-id":133}],"doc":"Splittable offsets","default":null,"field-id":132},{"name":"sort_order_id","type":["null","int"],"doc":"Sort order ID","default":null,"field-id":140}]},"field-id":2}]}\x14avro.codec\x0edeflate\x1cformat-version\x021"partition-spec-id\x020\x1ciceberg.schema\xea${"type":"struct","schema-id":0,"fields":[{"id":0,"name":"status","required":true,"type":"int"},{"id":1,"name":"snapshot_id","required":false,"type":"long"},{"id":2,"name":"data_file","required":true,"type":{"type":"struct","fields":[{"id":100,"name":"file_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":101,"name":"file_format","required":true,"type":"string","doc":"File format name: avro, orc, or parquet"},{"id":102,"name":"partition","required":true,"type":{"type":"struct","fields":[]}},{"id":103,"name":"record_count","required":true,"type":"long","doc":"Number of records in the file"},{"id":104,"name":"file_size_in_bytes","required":true,"type":"long","doc":"Total file size in bytes"},{"id":105,"name":"block_size_in_bytes","required":true,"type":"long"},{"id":108,"name":"column_sizes","required":false,"type":{"type":"map","key-id":117,"key":"int","value-id":118,"value":"long","value-required":true},"doc":"Map of column id to total size on disk"},{"id":109,"name":"value_counts","required":false,"type":{"type":"map","key-id":119,"key":"int","value-id":120,"value":"long","value-required":true},"doc":"Map of column id to total count, including null and NaN"},{"id":110,"name":"null_value_counts","required":false,"type":{"type":"map","key-id":121,"key":"int","value-id":122,"value":"long","value-required":true},"doc":"Map of column id to null value count"},{"id":137,"name":"nan_value_counts","required":false,"type":{"type":"map","key-id":138,"key":"int","value-id":139,"value":"long","value-required":true},"doc":"Map of column id to number of NaN values in the column"},{"id":125,"name":"lower_bounds","required":false,"type":{"type":"map","key-id":126,"key":"int","value-id":127,"value":"binary","value-required":true},"doc":"Map of column id to lower bound"},{"id":128,"name":"upper_bounds","required":false,"type":{"type":"map","key-id":129,"key":"int","value-id":130,"value":"binary","value-required":true},"doc":"Map of column id to upper bound"},{"id":131,"name":"key_metadata","required":false,"type":"binary","doc":"Encryption key metadata 
blob"},{"id":132,"name":"split_offsets","required":false,"type":{"type":"list","element-id":133,"element":"long","element-required":true},"doc":"Splittable offsets"},{"id":140,"name":"sort_order_id","required":false,"type":"int","doc":"Sort order ID"}]}}]}\x1cpartition-spec\x04[]\x00\x03H\xb2^\xec\x11_7\x06\x17\xd1\x80\x1bZ\xf98\x02\xa4\x035\x8c=R\xc30\x14\x84eE\x05\x15p\x11\xc5\x92m\xfduP\xd0\x03\x03\xf5\x1b\xe5I\x02fR\x80-\xf7>AN\xc0p\x00\n\x0eA\xe5\x92\x9e\x0e\xba\xe4\x0e\x0cv&\xd9\xea\xdb\xd9o\x96\xd2\xd7\xdf\xef\xbf\xed\xd7\xe7\xcf\xae\xf8\xa0\xe5\x13\xc6Ul\x1f \xf8\xec\xcb\x10\x93\xef\xd7\xb9\xcc\xb1\xcbp\\p\xddw9\xb6 \xe11\xa4\x0e\xdc\nSe\x8d\x06iU\r\x8d\t5\xd8\xca(P\xd8\x88Fx#Ce\xcb\xfd\x9b\x98\xc3\xa5\xe1IK-\x9c@\xaek\x8b\xbcI\xc9s\xa7\x8d\xe6:h\xa5\xb1R\x16E\xcdgY.\x9f}\xfb\xd2\xc7|z}y{s\x7fu7\x16\x9b\xb3a\x18.(\xa3\xef\x0b\xf6\xb6 \x13\x8c\x05\x1b\x8b\x19\x08#\x84\xcepN\x0eaT\xee;\x1e;snr\xe8\xc9\xe4\xfd\x03\x03H\xb2^\xec\x11_7\x06\x17\xd1\x80\x1bZ\xf98', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/f0279b92-44d9-4660-aef4-f4083db67bb3-m0.avro', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/f0279b92-44d9-4660-aef4-f4083db67bb3-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fmetadata%2Ff0279b92-44d9-4660-aef4-f4083db67bb3-m0.avro&user.name=root HTTP/1.1" 201 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/f0279b92-44d9-4660-aef4-f4083db67bb3-m0.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/f0279b92-44d9-4660-aef4-f4083db67bb3-m0.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro?op=CREATE', 
'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'Obj\x01\x0e\x0cschema\xa4\x02{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":false,"type":"long"},{"id":2,"name":"b","required":false,"type":"string"}]}\x16avro.schema\x96.{"type":"record","name":"manifest_entry","fields":[{"name":"status","type":"int","field-id":0},{"name":"snapshot_id","type":["null","long"],"default":null,"field-id":1},{"name":"data_file","type":{"type":"record","name":"r2","fields":[{"name":"file_path","type":"string","doc":"Location URI with FS scheme","field-id":100},{"name":"file_format","type":"string","doc":"File format name: avro, orc, or parquet","field-id":101},{"name":"partition","type":{"type":"record","name":"r102","fields":[]},"field-id":102},{"name":"record_count","type":"long","doc":"Number of records in the file","field-id":103},{"name":"file_size_in_bytes","type":"long","doc":"Total file size in bytes","field-id":104},{"name":"block_size_in_bytes","type":"long","field-id":105},{"name":"column_sizes","type":["null",{"type":"array","items":{"type":"record","name":"k117_v118","fields":[{"name":"key","type":"int","field-id":117},{"name":"value","type":"long","field-id":118}]},"logicalType":"map"}],"doc":"Map of column id to total size on disk","default":null,"field-id":108},{"name":"value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k119_v120","fields":[{"name":"key","type":"int","field-id":119},{"name":"value","type":"long","field-id":120}]},"logicalType":"map"}],"doc":"Map of column id to total count, including null and 
NaN","default":null,"field-id":109},{"name":"null_value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k121_v122","fields":[{"name":"key","type":"int","field-id":121},{"name":"value","type":"long","field-id":122}]},"logicalType":"map"}],"doc":"Map of column id to null value count","default":null,"field-id":110},{"name":"nan_value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k138_v139","fields":[{"name":"key","type":"int","field-id":138},{"name":"value","type":"long","field-id":139}]},"logicalType":"map"}],"doc":"Map of column id to number of NaN values in the column","default":null,"field-id":137},{"name":"lower_bounds","type":["null",{"type":"array","items":{"type":"record","name":"k126_v127","fields":[{"name":"key","type":"int","field-id":126},{"name":"value","type":"bytes","field-id":127}]},"logicalType":"map"}],"doc":"Map of column id to lower bound","default":null,"field-id":125},{"name":"upper_bounds","type":["null",{"type":"array","items":{"type":"record","name":"k129_v130","fields":[{"name":"key","type":"int","field-id":129},{"name":"value","type":"bytes","field-id":130}]},"logicalType":"map"}],"doc":"Map of column id to upper bound","default":null,"field-id":128},{"name":"key_metadata","type":["null","bytes"],"doc":"Encryption key metadata blob","default":null,"field-id":131},{"name":"split_offsets","type":["null",{"type":"array","items":"long","element-id":133}],"doc":"Splittable offsets","default":null,"field-id":132},{"name":"sort_order_id","type":["null","int"],"doc":"Sort order ID","default":null,"field-id":140}]},"field-id":2}]}\x14avro.codec\x0edeflate\x1cformat-version\x021"partition-spec-id\x020\x1ciceberg.schema\xea${"type":"struct","schema-id":0,"fields":[{"id":0,"name":"status","required":true,"type":"int"},{"id":1,"name":"snapshot_id","required":false,"type":"long"},{"id":2,"name":"data_file","required":true,"type":{"type":"struct","fields":[{"id":100,"name":"file_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":101,"name":"file_format","required":true,"type":"string","doc":"File format name: avro, orc, or parquet"},{"id":102,"name":"partition","required":true,"type":{"type":"struct","fields":[]}},{"id":103,"name":"record_count","required":true,"type":"long","doc":"Number of records in the file"},{"id":104,"name":"file_size_in_bytes","required":true,"type":"long","doc":"Total file size in bytes"},{"id":105,"name":"block_size_in_bytes","required":true,"type":"long"},{"id":108,"name":"column_sizes","required":false,"type":{"type":"map","key-id":117,"key":"int","value-id":118,"value":"long","value-required":true},"doc":"Map of column id to total size on disk"},{"id":109,"name":"value_counts","required":false,"type":{"type":"map","key-id":119,"key":"int","value-id":120,"value":"long","value-required":true},"doc":"Map of column id to total count, including null and NaN"},{"id":110,"name":"null_value_counts","required":false,"type":{"type":"map","key-id":121,"key":"int","value-id":122,"value":"long","value-required":true},"doc":"Map of column id to null value count"},{"id":137,"name":"nan_value_counts","required":false,"type":{"type":"map","key-id":138,"key":"int","value-id":139,"value":"long","value-required":true},"doc":"Map of column id to number of NaN values in the column"},{"id":125,"name":"lower_bounds","required":false,"type":{"type":"map","key-id":126,"key":"int","value-id":127,"value":"binary","value-required":true},"doc":"Map of column id to lower 
bound"},{"id":128,"name":"upper_bounds","required":false,"type":{"type":"map","key-id":129,"key":"int","value-id":130,"value":"binary","value-required":true},"doc":"Map of column id to upper bound"},{"id":131,"name":"key_metadata","required":false,"type":"binary","doc":"Encryption key metadata blob"},{"id":132,"name":"split_offsets","required":false,"type":{"type":"list","element-id":133,"element":"long","element-required":true},"doc":"Splittable offsets"},{"id":140,"name":"sort_order_id","required":false,"type":"int","doc":"Sort order ID"}]}}]}\x1cpartition-spec\x04[]\x00N\x9b\x96R\xa0\x1dfv\x80\x8d\xbdgE\xa4)\x83\x02\xa0\x035\x8c;R\xc30\x14EeE\x05\x15\xc9\x0e\xb2\x02a\xebg=uP\xd0\x03\x03\xb5F\xd6\'a&\x05\xb1\xe5\xde\r\x1d\xc3\x12RPR\xa4g\x07^\x0c%\x0b\xc0\xce$\xb7:\xf7\xbe3\x0f\xe3\xe3\xc7\xef\xdf\xcf\xd7\xe1}}\xc4\xe5\xab\x8fMl76\xb8\xec\xca\x10\x93\xebw\xb9\xcc\xb1\xcb\xf6r\xf1\xbb\xbe\xcb\xb1\xb5\xccnC\xea\xaci|\xe2\xa0k\xcb@\t+u\x10\x16\xb8VVyY\xc9\xcai\x168\x94\xa7o\xd5\x1c\xca\x14\xd5\xdcq#\x05P\x01\x82Qi\xa0\xa6\x10\x1aM!\xd5N\xebT\x81\x9a\x96Yf7o\xae\xdd\xf71_?\xdc==\xbe\xdc?\x8f\xc5\xe7r\x18\x86[L\xf0\xf7\x82\x1c\x16h\x82\xb1 c1\x03"\x08\xe1\x19V\xe8\x1c\x82\xd9\xa9\xfbK\'\xc6L\x0e\xbe\x9a\xbc\x7fN\x9b\x96R\xa0\x1dfv\x80\x8d\xbdgE\xa4)\x83', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fmetadata%2F570f4492-34ff-463d-bfc1-142c1828183a-m0.avro&user.name=root HTTP/1.1" 201 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None write_data protocol:http host:hdfs1 port:50070 path: 
/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'3', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 Command to send: m d o363 e Answer received: !yv Command to send: m d o364 e Answer received: !yv Command to send: m d o365 e Answer received: !yv Command to send: m d o366 e Answer received: !yv Command to send: m d o368 e Answer received: !yv Command to send: m d o369 e Answer received: !yv Command to send: m d o370 e Answer received: !yv Command to send: m d o371 e Answer received: !yv Command to send: m d o374 e Answer received: !yv Command to send: m d o379 e Answer received: !yv Command to send: m d o387 e Answer received: !yv Command to send: m d o393 e Answer received: !yv Command to send: m d o400 e Answer received: !yv Command to send: m d o405 e Answer received: !yv Command to send: m d o372 e Answer received: !yv Command to send: m d o373 e Answer received: !yv Command to send: m d o375 e Answer received: !yv Command to send: m d o376 e Answer received: !yv Command to send: m d o377 e Answer received: !yv Command to send: m d o378 e Answer received: !yv Command to send: m d o380 e Answer received: !yv Command to send: m d o381 e Answer received: !yv Command to send: m d o382 e Answer received: !yv Command to send: m d o383 e Answer received: !yv Command to send: m d o384 e Answer received: !yv 
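The interleaved `Command to send: m d oNNN e` / `Answer received: !yv` pairs here (and continuing below) are not HDFS traffic: they appear to be py4j gateway frames, most likely from the Spark session used to seed the Iceberg table (the metadata above records `spark.app.id`). In py4j's wire protocol, `m` is the memory command, `d` the delete sub-command, `oNNN` a tracked Java object id, `e` the end-of-command marker, and `!yv` a success answer with a void return — so each pair is the Python garbage collector releasing one JVM-side object. A minimal decoder for these frames, written as a reading aid and not anything py4j itself ships:

```python
#!/usr/bin/env python3
"""Decode the py4j gateway frames seen in this log.

A sketch only: the frame layout follows py4j.protocol, but this decoder
is an illustration, not part of py4j.
"""

def describe_command(cmd: str) -> str:
    parts = cmd.split()
    # "m d o363 e": memory command, delete sub-command, object id, end marker.
    if len(parts) == 4 and parts[:2] == ["m", "d"] and parts[3] == "e":
        return f"release JVM-side object {parts[2]}"
    return f"unrecognized command: {cmd!r}"

def describe_answer(ans: str) -> str:
    # "!yv": return message ('!'), success ('y'), void return type ('v').
    return "success, void return" if ans == "!yv" else f"unrecognized answer: {ans!r}"

if __name__ == "__main__":
    print(describe_command("m d o363 e"))  # -> release JVM-side object o363
    print(describe_answer("!yv"))          # -> success, void return
```

These frames interleave with the WebHDFS writes because the finalizers fire whenever the Python garbage collector runs, independently of the test's I/O.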
Command to send: m d o386 e Answer received: !yv Command to send: m d o388 e Answer received: !yv Command to send: m d o389 e Answer received: !yv Command to send: m d o390 e Answer received: !yv Command to send: m d o391 e Answer received: !yv Command to send: m d o392 e Answer received: !yv Command to send: m d o394 e Answer received: !yv Command to send: m d o395 e Answer received: !yv Command to send: m d o396 e Answer received: !yv Command to send: m d o398 e Answer received: !yv Command to send: m d o399 e Answer received: !yv Command to send: m d o401 e Answer received: !yv Command to send: m d o402 e Answer received: !yv Command to send: m d o403 e Answer received: !yv Command to send: m d o406 e Answer received: !yv http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fmetadata%2Fversion-hint.text&user.name=root HTTP/1.1" 201 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 
'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'Obj\x01\x0e\x0cschema\xa4\x02{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":false,"type":"long"},{"id":2,"name":"b","required":false,"type":"string"}]}\x16avro.schema\x96.{"type":"record","name":"manifest_entry","fields":[{"name":"status","type":"int","field-id":0},{"name":"snapshot_id","type":["null","long"],"default":null,"field-id":1},{"name":"data_file","type":{"type":"record","name":"r2","fields":[{"name":"file_path","type":"string","doc":"Location URI with FS scheme","field-id":100},{"name":"file_format","type":"string","doc":"File format name: avro, orc, or parquet","field-id":101},{"name":"partition","type":{"type":"record","name":"r102","fields":[]},"field-id":102},{"name":"record_count","type":"long","doc":"Number of records in the file","field-id":103},{"name":"file_size_in_bytes","type":"long","doc":"Total file size in bytes","field-id":104},{"name":"block_size_in_bytes","type":"long","field-id":105},{"name":"column_sizes","type":["null",{"type":"array","items":{"type":"record","name":"k117_v118","fields":[{"name":"key","type":"int","field-id":117},{"name":"value","type":"long","field-id":118}]},"logicalType":"map"}],"doc":"Map of column id to total size on disk","default":null,"field-id":108},{"name":"value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k119_v120","fields":[{"name":"key","type":"int","field-id":119},{"name":"value","type":"long","field-id":120}]},"logicalType":"map"}],"doc":"Map of column id to total count, including null and NaN","default":null,"field-id":109},{"name":"null_value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k121_v122","fields":[{"name":"key","type":"int","field-id":121},{"name":"value","type":"long","field-id":122}]},"logicalType":"map"}],"doc":"Map of column id to null value count","default":null,"field-id":110},{"name":"nan_value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k138_v139","fields":[{"name":"key","type":"int","field-id":138},{"name":"value","type":"long","field-id":139}]},"logicalType":"map"}],"doc":"Map of column id to number of NaN values in the column","default":null,"field-id":137},{"name":"lower_bounds","type":["null",{"type":"array","items":{"type":"record","name":"k126_v127","fields":[{"name":"key","type":"int","field-id":126},{"name":"value","type":"bytes","field-id":127}]},"logicalType":"map"}],"doc":"Map of column id to lower 
bound","default":null,"field-id":125},{"name":"upper_bounds","type":["null",{"type":"array","items":{"type":"record","name":"k129_v130","fields":[{"name":"key","type":"int","field-id":129},{"name":"value","type":"bytes","field-id":130}]},"logicalType":"map"}],"doc":"Map of column id to upper bound","default":null,"field-id":128},{"name":"key_metadata","type":["null","bytes"],"doc":"Encryption key metadata blob","default":null,"field-id":131},{"name":"split_offsets","type":["null",{"type":"array","items":"long","element-id":133}],"doc":"Splittable offsets","default":null,"field-id":132},{"name":"sort_order_id","type":["null","int"],"doc":"Sort order ID","default":null,"field-id":140}]},"field-id":2}]}\x14avro.codec\x0edeflate\x1cformat-version\x021"partition-spec-id\x020\x1ciceberg.schema\xea${"type":"struct","schema-id":0,"fields":[{"id":0,"name":"status","required":true,"type":"int"},{"id":1,"name":"snapshot_id","required":false,"type":"long"},{"id":2,"name":"data_file","required":true,"type":{"type":"struct","fields":[{"id":100,"name":"file_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":101,"name":"file_format","required":true,"type":"string","doc":"File format name: avro, orc, or parquet"},{"id":102,"name":"partition","required":true,"type":{"type":"struct","fields":[]}},{"id":103,"name":"record_count","required":true,"type":"long","doc":"Number of records in the file"},{"id":104,"name":"file_size_in_bytes","required":true,"type":"long","doc":"Total file size in bytes"},{"id":105,"name":"block_size_in_bytes","required":true,"type":"long"},{"id":108,"name":"column_sizes","required":false,"type":{"type":"map","key-id":117,"key":"int","value-id":118,"value":"long","value-required":true},"doc":"Map of column id to total size on disk"},{"id":109,"name":"value_counts","required":false,"type":{"type":"map","key-id":119,"key":"int","value-id":120,"value":"long","value-required":true},"doc":"Map of column id to total count, including null and NaN"},{"id":110,"name":"null_value_counts","required":false,"type":{"type":"map","key-id":121,"key":"int","value-id":122,"value":"long","value-required":true},"doc":"Map of column id to null value count"},{"id":137,"name":"nan_value_counts","required":false,"type":{"type":"map","key-id":138,"key":"int","value-id":139,"value":"long","value-required":true},"doc":"Map of column id to number of NaN values in the column"},{"id":125,"name":"lower_bounds","required":false,"type":{"type":"map","key-id":126,"key":"int","value-id":127,"value":"binary","value-required":true},"doc":"Map of column id to lower bound"},{"id":128,"name":"upper_bounds","required":false,"type":{"type":"map","key-id":129,"key":"int","value-id":130,"value":"binary","value-required":true},"doc":"Map of column id to upper bound"},{"id":131,"name":"key_metadata","required":false,"type":"binary","doc":"Encryption key metadata blob"},{"id":132,"name":"split_offsets","required":false,"type":{"type":"list","element-id":133,"element":"long","element-required":true},"doc":"Splittable offsets"},{"id":140,"name":"sort_order_id","required":false,"type":"int","doc":"Sort order 
ID"}]}}]}\x1cpartition-spec\x04[]\x00\x14\x89&/\xcb\x13\xaf\xce\xae1c\x15r>\x91\xf7\x02\xa4\x035\x8c\xb1N\xc30\x14E\x1d\xd7\x03\x13\xed\x8f\x98\xd8\x8e\x1d;[\x19\xd8\x01\xc1\xfc\xf4b;-R\x07\x9a8{66\x18\xd9\xf8\x00\x06\xbe\x82!?\xc1\xc8\xca\xcc\x82DR\xb5w:W\xf7\xe8R\xfa\xf9\xfb\xf7\xf3\xf2\xfa\xf4\xf5\x9d}\xd0\xfc\xc1\xc7:\xb6\x1b\x08\x980\x0f\xb1\xc1~\x97\xf2\x14\xbb\x04\xa7\xc5\xef\xfa.\xc5\x16$lC\xd3AU\xfbF9[\x82t\xa6\x00mC\x01NY\x03\xc6k\xa1\x05Z\x19\x94\xcb\x0fob\x0e\x97\x05\xd7\xbeq6\x08\xc5\x8d\xaej\xae\x95B^\x17(x\x94F\x96\xb6D\x1d\x05\xf2Y\x96\x17\x8f\xd8\xee\xfb\x98\xce\xaf/oo\xee\xaf\xee\xc6\xecy9\x0c\xc3\x9a2\xfa\xbe`o\x0b2\xc1\x98\xb11\x9b\x810B\xe8\x0c+r\x0c\xa3\xf2\xd0\xfd\xa9\xb3\xaa\x9a\x1cz6y\xff\x14\x89&/\xcb\x13\xaf\xce\xae1c\x15r>\x91\xf7', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fmetadata%2F359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro&user.name=root HTTP/1.1" 201 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT 
/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'{\n "format-version" : 1,\n "table-uuid" : "762d77fc-31c8-4b8f-a430-fe8ce8ac91f5",\n "location" : "/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28",\n "last-updated-ms" : 1743790503606,\n "last-column-id" : 2,\n "schema" : {\n "type" : "struct",\n "schema-id" : 0,\n "fields" : [ {\n "id" : 1,\n "name" : "a",\n "required" : false,\n "type" : "long"\n }, {\n "id" : 2,\n "name" : "b",\n "required" : false,\n "type" : "string"\n } ]\n },\n "current-schema-id" : 0,\n "schemas" : [ {\n "type" : "struct",\n "schema-id" : 0,\n "fields" : [ {\n "id" : 1,\n "name" : "a",\n "required" : false,\n "type" : "long"\n }, {\n "id" : 2,\n "name" : "b",\n "required" : false,\n "type" : "string"\n } ]\n } ],\n "partition-spec" : [ ],\n "default-spec-id" : 0,\n "partition-specs" : [ {\n "spec-id" : 0,\n "fields" : [ ]\n } ],\n "last-partition-id" : 999,\n "default-sort-order-id" : 0,\n "sort-orders" : [ {\n "order-id" : 0,\n "fields" : [ ]\n } ],\n "properties" : {\n "owner" : "root"\n },\n "current-snapshot-id" : 8276787480606260770,\n "refs" : {\n "main" : {\n "snapshot-id" : 8276787480606260770,\n "type" : "branch"\n }\n },\n "snapshots" : [ {\n "snapshot-id" : 8276787480606260770,\n "timestamp-ms" : 1743790503606,\n "summary" : {\n "operation" : "append",\n "spark.app.id" : "local-1743790492634",\n "added-data-files" : "1",\n "added-records" : "100",\n "added-files-size" : "967",\n "changed-partition-count" : "1",\n "total-records" : "100",\n "total-files-size" : "967",\n "total-data-files" : "1",\n "total-delete-files" : "0",\n "total-position-deletes" : "0",\n "total-equality-deletes" : "0"\n },\n "manifest-list" : "/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro",\n "schema-id" : 0\n } ],\n "statistics" : [ ],\n "snapshot-log" : [ {\n "timestamp-ms" : 1743790503606,\n "snapshot-id" : 8276787480606260770\n } ],\n "metadata-log" : [ ]\n}', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': 
{'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fmetadata%2Fv1.metadata.json&user.name=root HTTP/1.1" 201 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v3.metadata.json user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v3.metadata.json?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v3.metadata.json?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v3.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 
'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v3.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v3.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'{\n "format-version" : 1,\n "table-uuid" : "762d77fc-31c8-4b8f-a430-fe8ce8ac91f5",\n "location" : "/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28",\n "last-updated-ms" : 1743790504648,\n "last-column-id" : 2,\n "schema" : {\n "type" : "struct",\n "schema-id" : 0,\n "fields" : [ {\n "id" : 1,\n "name" : "a",\n "required" : false,\n "type" : "long"\n }, {\n "id" : 2,\n "name" : "b",\n "required" : false,\n "type" : "string"\n } ]\n },\n "current-schema-id" : 0,\n "schemas" : [ {\n "type" : "struct",\n "schema-id" : 0,\n "fields" : [ {\n "id" : 1,\n "name" : "a",\n "required" : false,\n "type" : "long"\n }, {\n "id" : 2,\n "name" : "b",\n "required" : false,\n "type" : "string"\n } ]\n } ],\n "partition-spec" : [ ],\n "default-spec-id" : 0,\n "partition-specs" : [ {\n "spec-id" : 0,\n "fields" : [ ]\n } ],\n "last-partition-id" : 999,\n "default-sort-order-id" : 0,\n "sort-orders" : [ {\n "order-id" : 0,\n "fields" : [ ]\n } ],\n "properties" : {\n "owner" : "root"\n },\n "current-snapshot-id" : 8494498371458529486,\n "refs" : {\n "main" : {\n "snapshot-id" : 8494498371458529486,\n "type" : "branch"\n }\n },\n "snapshots" : [ {\n "snapshot-id" : 8276787480606260770,\n "timestamp-ms" : 1743790503606,\n "summary" : {\n "operation" : "append",\n "spark.app.id" : "local-1743790492634",\n "added-data-files" : "1",\n "added-records" : "100",\n "added-files-size" : "967",\n "changed-partition-count" : "1",\n "total-records" : "100",\n "total-files-size" : "967",\n "total-data-files" : "1",\n "total-delete-files" : "0",\n "total-position-deletes" : "0",\n "total-equality-deletes" : "0"\n },\n "manifest-list" : "/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro",\n "schema-id" : 0\n }, {\n "snapshot-id" : 1118366645057585943,\n "parent-snapshot-id" : 8276787480606260770,\n "timestamp-ms" : 1743790504083,\n "summary" : {\n "operation" : "append",\n "spark.app.id" : "local-1743790492634",\n "added-data-files" : "1",\n "added-records" : "100",\n "added-files-size" : "967",\n "changed-partition-count" : "1",\n "total-records" : "200",\n "total-files-size" : "1934",\n "total-data-files" : "2",\n "total-delete-files" : "0",\n "total-position-deletes" : "0",\n "total-equality-deletes" : "0"\n },\n "manifest-list" : "/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro",\n "schema-id" : 0\n }, {\n "snapshot-id" : 8494498371458529486,\n "parent-snapshot-id" : 1118366645057585943,\n "timestamp-ms" : 1743790504648,\n "summary" : {\n "operation" : "append",\n "spark.app.id" : "local-1743790492634",\n "added-data-files" : "1",\n "added-records" : "100",\n "added-files-size" : "967",\n "changed-partition-count" : "1",\n "total-records" : "300",\n "total-files-size" : "2901",\n "total-data-files" : "3",\n "total-delete-files" : "0",\n "total-position-deletes" : "0",\n 
"total-equality-deletes" : "0"\n },\n "manifest-list" : "/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8494498371458529486-1-f0279b92-44d9-4660-aef4-f4083db67bb3.avro",\n "schema-id" : 0\n } ],\n "statistics" : [ ],\n "snapshot-log" : [ {\n "timestamp-ms" : 1743790503606,\n "snapshot-id" : 8276787480606260770\n }, {\n "timestamp-ms" : 1743790504083,\n "snapshot-id" : 1118366645057585943\n }, {\n "timestamp-ms" : 1743790504648,\n "snapshot-id" : 8494498371458529486\n } ],\n "metadata-log" : [ {\n "timestamp-ms" : 1743790503606,\n "metadata-file" : "/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json"\n }, {\n "timestamp-ms" : 1743790504083,\n "metadata-file" : "/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json"\n } ]\n}', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v3.metadata.json', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v3.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fmetadata%2Fv3.metadata.json&user.name=root HTTP/1.1" 201 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v3.metadata.json', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v3.metadata.json', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} Adding another dataframe. 
result files: ['/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-17-f616090c-638c-4ffa-9676-6d656c258c03-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8494498371458529486-1-f0279b92-44d9-4660-aef4-f4083db67bb3.avro', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/f0279b92-44d9-4660-aef4-f4083db67bb3-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v3.metadata.json'] Setup complete. 
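The resulting layout is a standard version-hinted Iceberg table: version-hint.text holds the byte '3', so a reader loads v3.metadata.json, whose "current-snapshot-id" selects the snapshot whose "manifest-list" Avro file drives the scan. A small sketch of that resolution, assuming a hypothetical read_file(path) -> bytes callback (for example a WebHDFS OPEN wrapper; it is not one of the test helpers):

import json
import posixpath

def current_manifest_list(read_file, table_location):
    # read_file(path) -> bytes is an assumed I/O callback, not a test helper.
    hint = read_file(posixpath.join(table_location, "metadata", "version-hint.text"))
    version = int(hint.decode().strip())  # the setup above wrote b'3'
    meta = json.loads(read_file(posixpath.join(
        table_location, "metadata", f"v{version}.metadata.json")))
    # "current-snapshot-id" picks the live snapshot out of "snapshots";
    # its "manifest-list" file is the entry point for planning a scan.
    snap_id = meta["current-snapshot-id"]
    snapshot = next(s for s in meta["snapshots"] if s["snapshot-id"] == snap_id)
    return snapshot["manifest-list"]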
files: ['/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-17-f616090c-638c-4ffa-9676-6d656c258c03-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8494498371458529486-1-f0279b92-44d9-4660-aef4-f4083db67bb3.avro', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/f0279b92-44d9-4660-aef4-f4083db67bb3-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v3.metadata.json'] Executing query SELECT * FROM system.clusters on node1 Clusters setup: cluster_simple 1 1 0 1 node1 172.16.2.10 9000 1 default 0 0 0 \N \N \N cluster_simple 1 1 0 2 node2 172.16.2.8 9000 0 default 0 0 0 \N \N \N cluster_simple 1 1 0 3 node3 172.16.2.9 9000 0 default 0 0 0 \N \N \N Executing query SELECT * FROM icebergHDFS(hdfs, filename= 'iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/', format=Parquet, url = 'hdfs://hdfs1:9000/') on node1 Executing query SELECT * FROM icebergHDFSCluster('cluster_simple', hdfs, filename= 'iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/', format=Parquet, url = 'hdfs://hdfs1:9000/') on node1 Executing query SELECT * FROM icebergHDFS(hdfs, filename= 'iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/', format=Parquet, url = 'hdfs://hdfs1:9000/') SETTINGS object_storage_cluster='cluster_simple' on node1 Executing query DROP TABLE IF EXISTS test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28; CREATE TABLE test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28 ENGINE=IcebergHDFS(hdfs, filename = 'iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/', format=Parquet, url = 'hdfs://hdfs1:9000/') SETTINGS object_storage_cluster = 'cluster_simple' on node1 Executing query SELECT * FROM test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28 on node1 Executing query SELECT * FROM remote('node2', icebergHDFSCluster('cluster_simple', hdfs, filename= 'iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/', format=Parquet, 
url = 'hdfs://hdfs1:9000/') ) on node1 Executing query DROP TABLE IF EXISTS `test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28` SYNC on node1 Executing query DROP TABLE IF EXISTS test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28; CREATE TABLE test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28 ENGINE=IcebergHDFS(hdfs, filename = 'iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/', format=Parquet, url = 'hdfs://hdfs1:9000/') on node1 Command to send: m d o408 e Answer received: !yv Command to send: m d o409 e Answer received: !yv Executing query SELECT * FROM test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28 on node1 Executing query SELECT * FROM test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28 SETTINGS object_storage_cluster='cluster_simple' on node1 ------------------------------ Captured log call ------------------------------- 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o50 sc e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro294 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o294 defaultParallelism e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yi1 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o61 range i0 i100 i1 i1 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro295 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ylo296 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o296 add sa e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro296 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro297 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o295 toDF ro297 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro298 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o50 sc e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro299 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o299 defaultParallelism e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yi1 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o61 range i1 i101 i1 i1 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro300 (clientserver.py:512, send_command) 2025-04-04 18:15:03 
[ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ylo301 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o301 add sb e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro301 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro302 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o300 toDF ro302 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro303 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o303 apply sb e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro304 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u SparkSession rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro305 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o305 isDefined e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u SparkSession rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro306 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o306 get e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro307 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u SparkSession$ rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: 
!ycorg.apache.spark.sql.SparkSession$ (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession$ MODULE$ e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro308 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: i java.util.HashMap e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yao309 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o308 applyModifiableSettings ro307 ro309 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o61 parseDataType s"string" e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro310 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o304 cast ro310 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro311 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o303 withColumn sb ro311 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro312 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro313 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro314 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u org rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u org.apache rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u org.apache.spark rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 
670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions.Window rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.expressions.Window (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.expressions.Window orderBy e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ylo315 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o315 add ro314 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro315 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro316 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro316 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro317 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o313 over ro317 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro318 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o298 withColumn srow_index ro318 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro319 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro320 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 
18:15:03 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro321 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u org rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u org.apache rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u org.apache.spark rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions.Window rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.expressions.Window (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.expressions.Window orderBy e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ylo322 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o322 add ro321 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro322 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro323 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro323 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro324 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o320 over ro324 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] 
DEBUG : Answer received: !yro325 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o312 withColumn srow_index ro325 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro326 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ylo327 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o327 add srow_index e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro327 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro328 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o319 join ro326 ro328 sinner e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro329 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o329 drop srow_index e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro330 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o330 writeTo stest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro331 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o331 tableProperty sformat-version s1 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro332 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o331 using siceberg e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro333 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o331 create e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] INFO : GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:03 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:03 [ 670 ] DEBUG : http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data?user.name=root&op=GETFILESTATUS HTTP/1.1" 404 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:03 [ 670 ] INFO : MKDIRS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 
18:15:03 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:03 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data?user.name=root&op=MKDIRS HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:03 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet user:root, principal:None (hdfs_api.py:256, write_file) 2025-04-04 18:15:03 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:03 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:03 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:03 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:03 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:284, write_file) 2025-04-04 18:15:03 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': 
b'PAR1\x15\x00\x15\xc0\x0c\x15\xf6\x02\x15\xbd\xf3\xd4\x95\x06\x1c\x15\xc8\x01\x15\x00\x15\x08\x15\x08\x00\x00\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00-\xc5\xd7"\x02\x00\x00\x00\xc0\x90Y![$+\xb3\xb23C\x8a\xec\x8c\xca\x8e\xf2\xff\xff\xe0\xc1\xdd\xcb\x05\x02\xffZ\xdc\xea6\x07\xdd\xee\x0ew\xba\xcb\xdd\xeeq\xc8aG\xdc\xeb>\xf7;\xea\x01\x0fz\xc8\xc3\x1e\xf1\xa8\xc7<\xee\x98\'<\xe9\xb8\xa7\x9c\xf0\xb4g<\xeb9\xcf;\xe9\x05/z\xc9\xcb^\xf1\xaaSN;\xe35\xaf{\xc3\x9b\xde\xf2\xb6w\x9c\xf5\xae\xf7\xbc\xef\x03\x1f\xfa\xc89\x1f\xfb\xc4\xa7\xce\xfb\xcc\x05\x17}\xee\x0b\x97|\xe9+_\xfb\xc6\xb7\xbe\xf3\xbd\xcb~\xf0\xa3\x9f\xfc\xec\x8a\xab\xae\xf9\xc5\xaf~\xf3\xbb?\xfc\xe9/\xd7\xfd\xed\x1f7\xdc\xf4\xaf\xff\x00\x02\xc2\xe7q \x03\x00\x00\x15\x00\x15\xa0\t\x15\xea\x02\x15\x8d\xbc\xbf\xb8\t\x1c\x15\xc8\x01\x15\x00\x15\x08\x15\x08\x00\x00\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00\x1d\xd2\xb1\x81B1\x0cD\xc1\x0f\xd7\x10X\xb2$\xf7\xdf\x18\xe7I^\xb4\x93\xed\xeby\x9e\xef\xeb?\xeb&n\xf2f\xdf\xd4M\xdf\xcc\xcdy\xdf\xf1G\xbf\xba44uki\xeb(\xbb\xd8\xc5.v\xb1\x8b]\xecb\x17\xbb\xd8\xc5\x06\x1bl\xb0\xc1\x06\x1bl\xb0\xc1\x06\x1bl\xb2\xc9&\x9bl\xb2\xc9&\x9bl\xb2\xc9nv\xb3\x9b\xdd\xecf7\xbb\xd9\xcdnv\xb3\xc5\x16[l\xb1\xc5\x16[l\xb1\xc5\x16\xdbl\xb3\xcd6\xdbl\xb3\xcd6\xdbl\xb3\xc3\x0e;\xec\xb0\xc3\x0e;\xec\xb0\xc3\x0e{\xd8\xc3\x1e\xf6\xb0\x87=\xeca\x0f{\xd8s\xfe|\xe3\xf3\x03\xd4\xdb\x86\xadP\x02\x00\x00\x19\x11\x02\x19\x18\x08\x00\x00\x00\x00\x00\x00\x00\x00\x19\x18\x08c\x00\x00\x00\x00\x00\x00\x00\x15\x02\x19\x16\x00\x00\x19\x11\x02\x19\x18\x011\x19\x18\x0299\x15\x02\x19\x16\x00\x00\x19\x1c\x16\x08\x15\xaa\x03\x16\x00\x00\x00\x19\x1c\x16\xb2\x03\x15\x9e\x03\x16\x00\x00\x00\x15\x02\x19\x00&\xb2\x03\x1c\x15\x0c\x19%\x00\x08\x19\x18\x01b\x15\x04\x16\xc8\x01\x16\xd4\t\x16\x9e\x03&\xb2\x03<6\x00(\x0299\x18\x011\x00\x19\x1c\x15\x00\x15\x00\x15\x02\x00\x00\x16\xc8\x07\x15\x18\x16\x8e\x07\x15$\x00\x16\xc8\x16\x16\xc8\x01&\x08\x16\xc8\x06\x14\x00\x00\x19\x1c\x18\x0eiceberg.schema\x18\x90\x01{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":true,"type":"long"},{"id":2,"name":"b","required":true,"type":"string"}]}\x00\x18Jparquet-mr version 1.12.3 (build f8dced182c4c1fbdec6ccb3185537b5a01e6ed6b)\x19,\x1c\x00\x00\x1c\x00\x00\x00\xcf\x01\x00\x00PAR1', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:03 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:03 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fdata%2F00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:03 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 
'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:03 [ 670 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:309, write_file) 2025-04-04 18:15:03 [ 670 ] INFO : GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:03 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:03 [ 670 ] DEBUG : http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 404 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:03 [ 670 ] INFO : MKDIRS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:03 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:03 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata?user.name=root&op=MKDIRS HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:03 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro user:root, principal:None (hdfs_api.py:256, write_file) 2025-04-04 18:15:03 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:03 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:03 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:03 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 
'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:03 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:284, write_file) 2025-04-04 18:15:03 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'Obj\x01\x0c\x16avro.schema\xfa\x1b{"type":"record","name":"manifest_file","fields":[{"name":"manifest_path","type":"string","doc":"Location URI with FS scheme","field-id":500},{"name":"manifest_length","type":"long","doc":"Total file size in bytes","field-id":501},{"name":"partition_spec_id","type":"int","doc":"Spec ID used to write","field-id":502},{"name":"added_snapshot_id","type":["null","long"],"doc":"Snapshot ID that added the manifest","default":null,"field-id":503},{"name":"added_data_files_count","type":["null","int"],"doc":"Added entry count","default":null,"field-id":504},{"name":"existing_data_files_count","type":["null","int"],"doc":"Existing entry count","default":null,"field-id":505},{"name":"deleted_data_files_count","type":["null","int"],"doc":"Deleted entry count","default":null,"field-id":506},{"name":"partitions","type":["null",{"type":"array","items":{"type":"record","name":"r508","fields":[{"name":"contains_null","type":"boolean","doc":"True if any file has a null partition value","field-id":509},{"name":"contains_nan","type":["null","boolean"],"doc":"True if any file has a nan partition value","default":null,"field-id":518},{"name":"lower_bound","type":["null","bytes"],"doc":"Partition lower bound for all files","default":null,"field-id":510},{"name":"upper_bound","type":["null","bytes"],"doc":"Partition upper bound for all files","default":null,"field-id":511}]},"element-id":508}],"doc":"Summary for each partition","default":null,"field-id":507},{"name":"added_rows_count","type":["null","long"],"doc":"Added rows count","default":null,"field-id":512},{"name":"existing_rows_count","type":["null","long"],"doc":"Existing rows count","default":null,"field-id":513},{"name":"deleted_rows_count","type":["null","long"],"doc":"Deleted rows count","default":null,"field-id":514}]}\x14avro.codec\x0edeflate\x16snapshot-id&8276787480606260770\x1cformat-version\x021\x1ciceberg.schema\xb4\x1a{"type":"struct","schema-id":0,"fields":[{"id":500,"name":"manifest_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":501,"name":"manifest_length","required":true,"type":"long","doc":"Total file size in bytes"},{"id":502,"name":"partition_spec_id","required":true,"type":"int","doc":"Spec 
ID used to write"},{"id":503,"name":"added_snapshot_id","required":false,"type":"long","doc":"Snapshot ID that added the manifest"},{"id":504,"name":"added_data_files_count","required":false,"type":"int","doc":"Added entry count"},{"id":505,"name":"existing_data_files_count","required":false,"type":"int","doc":"Existing entry count"},{"id":506,"name":"deleted_data_files_count","required":false,"type":"int","doc":"Deleted entry count"},{"id":507,"name":"partitions","required":false,"type":{"type":"list","element-id":508,"element":{"type":"struct","fields":[{"id":509,"name":"contains_null","required":true,"type":"boolean","doc":"True if any file has a null partition value"},{"id":518,"name":"contains_nan","required":false,"type":"boolean","doc":"True if any file has a nan partition value"},{"id":510,"name":"lower_bound","required":false,"type":"binary","doc":"Partition lower bound for all files"},{"id":511,"name":"upper_bound","required":false,"type":"binary","doc":"Partition upper bound for all files"}]},"element-required":true},"doc":"Summary for each partition"},{"id":512,"name":"added_rows_count","required":false,"type":"long","doc":"Added rows count"},{"id":513,"name":"existing_rows_count","required":false,"type":"long","doc":"Existing rows count"},{"id":514,"name":"deleted_rows_count","required":false,"type":"long","doc":"Deleted rows count"}]}$parent-snapshot-id\x08null\x00\xa0C\x02\xe4a\xaf\xab\x01I\x89\x17G\xf3\xb7\x9br\x02\xac\x025\x8c\xbb\r\xc20\x14\x00\x13\xefc\xec\xc4\xcf\xb1\xbd\n\xcd\x93?\xcf\x80\x94\x08)qX\x81\x92\x12\x06\xa2`\tJZj\x1a\x10"\x12\xba\xe2\x8a\x93\xee\xc2\xc4.R\xa0q\x83\xc9\x17/\x12e?\xf7E\x14\x9a\n\xfeK\xec\xe7\xa9\xd0\x88\rnS\x9e\xd0\x85\x98[k:l\xacV\x08&)\xb4\xad\xd1\xa8#H\x90\xde4\xa9\xb5b\xa0\xe2\x97\xa3\xd2\x8e\x9c\x02\xcbM\xce\x86\x83\x03\xc3-X\xe2\x94d\xea\x88BP\x1d\xf0A\xae\xfca\xdc\x7f\xd6\x15\xbb\xbe\xde\xcf\xd3\xf9x\x7f\xd4\x8c\xb1j\xe1V\xff\xf4\x05\xa0C\x02\xe4a\xaf\xab\x01I\x89\x17G\xf3\xb7\x9br', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:03 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:03 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fmetadata%2Fsnap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:03 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro', 'Content-Length': '0', 'Server': 
'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:03 [ 670 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:309, write_file) 2025-04-04 18:15:03 [ 670 ] INFO : GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:03 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:03 [ 670 ] DEBUG : http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:03 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text user:root, principal:None (hdfs_api.py:256, write_file) 2025-04-04 18:15:03 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:03 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:03 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:03 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:03 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:284, write_file) 2025-04-04 18:15:03 [ 670 ] DEBUG : CALL: {'url': 
'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'1', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:03 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:03 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fmetadata%2Fversion-hint.text&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:03 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:03 [ 670 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:309, write_file) 2025-04-04 18:15:03 [ 670 ] INFO : GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:03 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:03 [ 670 ] DEBUG : http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:03 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro user:root, principal:None (hdfs_api.py:256, write_file) 2025-04-04 18:15:03 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:03 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 
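Every file write above follows the same two-step WebHDFS CREATE exchange: a PUT to the namenode (port 50070) with op=CREATE and redirects disabled, a 307 answer whose Location header names a datanode (port 50075), then a second PUT carrying the bytes, acknowledged with 201 Created. Below is a minimal sketch of that exchange, assuming only the `requests` library and the endpoints seen in this log; it is not the harness's actual hdfs_api.py, which also appends `file`/`user.name` parameters on step 2 and substitutes the container IP for the `hdfs1` hostname returned in Location.

```python
# Hedged sketch of the two-step WebHDFS CREATE visible in the log above.
# Step 1 asks the namenode where to write; it answers 307 with a datanode
# URL in the Location header. Step 2 sends the bytes to that datanode.
import requests

def webhdfs_create(namenode, path, data, user="root"):
    # Step 1: namenode PUT; redirects disabled so we can read Location ourselves.
    r1 = requests.put(
        f"http://{namenode}/webhdfs/v1{path}",
        params={"op": "CREATE", "overwrite": "true", "user.name": user},
        allow_redirects=False,
    )
    assert r1.status_code == 307, r1.status_code
    datanode_url = r1.headers["Location"]
    # Step 2: datanode PUT carrying the file contents; expect 201 Created.
    r2 = requests.put(datanode_url, data=data)
    assert r2.status_code == 201, r2.status_code

# e.g. webhdfs_create("172.16.2.2:50070",
#                     "/iceberg_data/default/<table>/metadata/version-hint.text",
#                     b"1")
```

The files written this way form the standard Iceberg layout for the table: data/*.parquet for rows, and under metadata/ a manifest (*-m0.avro), a manifest list (snap-*.avro), the table metadata (v1.metadata.json), and version-hint.text containing just `1`, which is how readers locate the current metadata version.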
2025-04-04 18:15:03 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:03 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:03 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:284, write_file) 2025-04-04 18:15:03 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'Obj\x01\x0e\x0cschema\xa4\x02{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":false,"type":"long"},{"id":2,"name":"b","required":false,"type":"string"}]}\x16avro.schema\x96.{"type":"record","name":"manifest_entry","fields":[{"name":"status","type":"int","field-id":0},{"name":"snapshot_id","type":["null","long"],"default":null,"field-id":1},{"name":"data_file","type":{"type":"record","name":"r2","fields":[{"name":"file_path","type":"string","doc":"Location URI with FS scheme","field-id":100},{"name":"file_format","type":"string","doc":"File format name: avro, orc, or parquet","field-id":101},{"name":"partition","type":{"type":"record","name":"r102","fields":[]},"field-id":102},{"name":"record_count","type":"long","doc":"Number of records in the file","field-id":103},{"name":"file_size_in_bytes","type":"long","doc":"Total file size in bytes","field-id":104},{"name":"block_size_in_bytes","type":"long","field-id":105},{"name":"column_sizes","type":["null",{"type":"array","items":{"type":"record","name":"k117_v118","fields":[{"name":"key","type":"int","field-id":117},{"name":"value","type":"long","field-id":118}]},"logicalType":"map"}],"doc":"Map of column id to total size on disk","default":null,"field-id":108},{"name":"value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k119_v120","fields":[{"name":"key","type":"int","field-id":119},{"name":"value","type":"long","field-id":120}]},"logicalType":"map"}],"doc":"Map of column id to total count, including null and 
NaN","default":null,"field-id":109},{"name":"null_value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k121_v122","fields":[{"name":"key","type":"int","field-id":121},{"name":"value","type":"long","field-id":122}]},"logicalType":"map"}],"doc":"Map of column id to null value count","default":null,"field-id":110},{"name":"nan_value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k138_v139","fields":[{"name":"key","type":"int","field-id":138},{"name":"value","type":"long","field-id":139}]},"logicalType":"map"}],"doc":"Map of column id to number of NaN values in the column","default":null,"field-id":137},{"name":"lower_bounds","type":["null",{"type":"array","items":{"type":"record","name":"k126_v127","fields":[{"name":"key","type":"int","field-id":126},{"name":"value","type":"bytes","field-id":127}]},"logicalType":"map"}],"doc":"Map of column id to lower bound","default":null,"field-id":125},{"name":"upper_bounds","type":["null",{"type":"array","items":{"type":"record","name":"k129_v130","fields":[{"name":"key","type":"int","field-id":129},{"name":"value","type":"bytes","field-id":130}]},"logicalType":"map"}],"doc":"Map of column id to upper bound","default":null,"field-id":128},{"name":"key_metadata","type":["null","bytes"],"doc":"Encryption key metadata blob","default":null,"field-id":131},{"name":"split_offsets","type":["null",{"type":"array","items":"long","element-id":133}],"doc":"Splittable offsets","default":null,"field-id":132},{"name":"sort_order_id","type":["null","int"],"doc":"Sort order ID","default":null,"field-id":140}]},"field-id":2}]}\x14avro.codec\x0edeflate\x1cformat-version\x021"partition-spec-id\x020\x1ciceberg.schema\xea${"type":"struct","schema-id":0,"fields":[{"id":0,"name":"status","required":true,"type":"int"},{"id":1,"name":"snapshot_id","required":false,"type":"long"},{"id":2,"name":"data_file","required":true,"type":{"type":"struct","fields":[{"id":100,"name":"file_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":101,"name":"file_format","required":true,"type":"string","doc":"File format name: avro, orc, or parquet"},{"id":102,"name":"partition","required":true,"type":{"type":"struct","fields":[]}},{"id":103,"name":"record_count","required":true,"type":"long","doc":"Number of records in the file"},{"id":104,"name":"file_size_in_bytes","required":true,"type":"long","doc":"Total file size in bytes"},{"id":105,"name":"block_size_in_bytes","required":true,"type":"long"},{"id":108,"name":"column_sizes","required":false,"type":{"type":"map","key-id":117,"key":"int","value-id":118,"value":"long","value-required":true},"doc":"Map of column id to total size on disk"},{"id":109,"name":"value_counts","required":false,"type":{"type":"map","key-id":119,"key":"int","value-id":120,"value":"long","value-required":true},"doc":"Map of column id to total count, including null and NaN"},{"id":110,"name":"null_value_counts","required":false,"type":{"type":"map","key-id":121,"key":"int","value-id":122,"value":"long","value-required":true},"doc":"Map of column id to null value count"},{"id":137,"name":"nan_value_counts","required":false,"type":{"type":"map","key-id":138,"key":"int","value-id":139,"value":"long","value-required":true},"doc":"Map of column id to number of NaN values in the column"},{"id":125,"name":"lower_bounds","required":false,"type":{"type":"map","key-id":126,"key":"int","value-id":127,"value":"binary","value-required":true},"doc":"Map of column id to lower 
bound"},{"id":128,"name":"upper_bounds","required":false,"type":{"type":"map","key-id":129,"key":"int","value-id":130,"value":"binary","value-required":true},"doc":"Map of column id to upper bound"},{"id":131,"name":"key_metadata","required":false,"type":"binary","doc":"Encryption key metadata blob"},{"id":132,"name":"split_offsets","required":false,"type":{"type":"list","element-id":133,"element":"long","element-required":true},"doc":"Splittable offsets"},{"id":140,"name":"sort_order_id","required":false,"type":"int","doc":"Sort order ID"}]}}]}\x1cpartition-spec\x04[]\x00\x14\x89&/\xcb\x13\xaf\xce\xae1c\x15r>\x91\xf7\x02\xa4\x035\x8c\xb1N\xc30\x14E\x1d\xd7\x03\x13\xed\x8f\x98\xd8\x8e\x1d;[\x19\xd8\x01\xc1\xfc\xf4b;-R\x07\x9a8{66\x18\xd9\xf8\x00\x06\xbe\x82!?\xc1\xc8\xca\xcc\x82DR\xb5w:W\xf7\xe8R\xfa\xf9\xfb\xf7\xf3\xf2\xfa\xf4\xf5\x9d}\xd0\xfc\xc1\xc7:\xb6\x1b\x08\x980\x0f\xb1\xc1~\x97\xf2\x14\xbb\x04\xa7\xc5\xef\xfa.\xc5\x16$lC\xd3AU\xfbF9[\x82t\xa6\x00mC\x01NY\x03\xc6k\xa1\x05Z\x19\x94\xcb\x0fob\x0e\x97\x05\xd7\xbeq6\x08\xc5\x8d\xaej\xae\x95B^\x17(x\x94F\x96\xb6D\x1d\x05\xf2Y\x96\x17\x8f\xd8\xee\xfb\x98\xce\xaf/oo\xee\xaf\xee\xc6\xecy9\x0c\xc3\x9a2\xfa\xbe`o\x0b2\xc1\x98\xb11\x9b\x810B\xe8\x0c+r\x0c\xa3\xf2\xd0\xfd\xa9\xb3\xaa\x9a\x1cz6y\xff\x14\x89&/\xcb\x13\xaf\xce\xae1c\x15r>\x91\xf7', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:03 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:03 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fmetadata%2F359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:03 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:03 [ 670 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:309, write_file) 2025-04-04 18:15:03 [ 670 ] INFO : GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata user.name=root 
172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:03 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:03 [ 670 ] DEBUG : http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:03 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json user:root, principal:None (hdfs_api.py:256, write_file) 2025-04-04 18:15:03 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:03 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:03 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:03 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:03 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:284, write_file) 2025-04-04 18:15:03 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'{\n "format-version" : 1,\n "table-uuid" : "762d77fc-31c8-4b8f-a430-fe8ce8ac91f5",\n "location" : "/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28",\n "last-updated-ms" : 1743790503606,\n "last-column-id" : 2,\n "schema" : {\n "type" : "struct",\n "schema-id" : 0,\n "fields" : [ {\n "id" : 1,\n "name" : "a",\n "required" : false,\n "type" : "long"\n }, {\n "id" : 2,\n "name" : "b",\n "required" : false,\n "type" : "string"\n } ]\n },\n "current-schema-id" : 0,\n "schemas" : [ {\n "type" : "struct",\n "schema-id" : 0,\n "fields" : [ {\n "id" : 1,\n "name" : "a",\n "required" : false,\n "type" 
: "long"\n }, {\n "id" : 2,\n "name" : "b",\n "required" : false,\n "type" : "string"\n } ]\n } ],\n "partition-spec" : [ ],\n "default-spec-id" : 0,\n "partition-specs" : [ {\n "spec-id" : 0,\n "fields" : [ ]\n } ],\n "last-partition-id" : 999,\n "default-sort-order-id" : 0,\n "sort-orders" : [ {\n "order-id" : 0,\n "fields" : [ ]\n } ],\n "properties" : {\n "owner" : "root"\n },\n "current-snapshot-id" : 8276787480606260770,\n "refs" : {\n "main" : {\n "snapshot-id" : 8276787480606260770,\n "type" : "branch"\n }\n },\n "snapshots" : [ {\n "snapshot-id" : 8276787480606260770,\n "timestamp-ms" : 1743790503606,\n "summary" : {\n "operation" : "append",\n "spark.app.id" : "local-1743790492634",\n "added-data-files" : "1",\n "added-records" : "100",\n "added-files-size" : "967",\n "changed-partition-count" : "1",\n "total-records" : "100",\n "total-files-size" : "967",\n "total-data-files" : "1",\n "total-delete-files" : "0",\n "total-position-deletes" : "0",\n "total-equality-deletes" : "0"\n },\n "manifest-list" : "/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro",\n "schema-id" : 0\n } ],\n "statistics" : [ ],\n "snapshot-log" : [ {\n "timestamp-ms" : 1743790503606,\n "snapshot-id" : 8276787480606260770\n } ],\n "metadata-log" : [ ]\n}', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:03 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:03 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fmetadata%2Fv1.metadata.json&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:03 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:03 [ 670 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:03 GMT, Fri, 04 Apr 2025 18:15:03 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:309, write_file) 2025-04-04 18:15:03 [ 670 ] INFO : Adding another dataframe. 
result files: ['/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json'] (test.py:645, add_df) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o50 sc e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro334 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o334 defaultParallelism e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yi1 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o61 range i0 i100 i1 i1 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro335 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ylo336 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o336 add sa e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro336 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro337 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o335 toDF ro337 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro338 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o50 sc e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro339 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o339 defaultParallelism e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yi1 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o61 range i1 i101 i1 i1 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro340 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, 
send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ylo341 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o341 add sb e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro341 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro342 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o340 toDF ro342 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro343 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o343 apply sb e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro344 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u SparkSession rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro345 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o345 isDefined e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u SparkSession rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro346 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o346 get e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro347 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u SparkSession$ rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession$ (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession$ MODULE$ e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 
670 ] DEBUG : Answer received: !yro348 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: i java.util.HashMap e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yao349 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o348 applyModifiableSettings ro347 ro349 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o61 parseDataType s"string" e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro350 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o344 cast ro350 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro351 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o343 withColumn sb ro351 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro352 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro353 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro354 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u org rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u org.apache rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u org.apache.spark rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG 
: Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions.Window rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.expressions.Window (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.expressions.Window orderBy e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ylo355 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o355 add ro354 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro355 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro356 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro356 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro357 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o353 over ro357 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro358 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o338 withColumn srow_index ro358 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro359 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro360 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 
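The clientserver.py DEBUG records here are Py4J wire traffic: `c` calls a method on a remote JVM object (`oNNN`), `r u`/`r m` resolve classes and methods reflectively, `i` instantiates, `m d` releases a remote reference, and every answer is prefixed with `!y`. A hedged reconstruction of the driver-side PySpark that would produce this call sequence follows; the real test's session carries Iceberg catalog configuration not shown in this part of the log, so a bare session stands in here.

```python
# Hedged reconstruction of the PySpark driving the Py4J traffic above;
# each DataFrame call maps to one or more "c oNNN ..." wire records.
from pyspark.sql import SparkSession, Window, functions as F

# Assumption: the actual session is configured with an Iceberg catalog.
spark = SparkSession.builder.getOrCreate()

df_a = spark.range(0, 100, 1, 1).toDF("a")             # "c o61 range i0 i100 i1 i1"
df_b = spark.range(1, 101, 1, 1).toDF("b")             # "c o61 range i1 i101 i1 i1"
df_b = df_b.withColumn("b", df_b["b"].cast("string"))  # parseDataType s"string"

# Positional zip of the two frames: number rows identically, join, drop the key.
w = Window.orderBy(F.monotonically_increasing_id())
df_a = df_a.withColumn("row_index", F.row_number().over(w))
df_b = df_b.withColumn("row_index", F.row_number().over(w))
df = df_a.join(df_b, ["row_index"], "inner").drop("row_index")

# "c o370 writeTo s<table>" then "c o371 append": DataFrameWriterV2 append.
df.writeTo(
    "test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28"
).append()
```

The `row_number().over(Window.orderBy(monotonically_increasing_id()))` pairing is the usual workaround for Spark's lack of a positional join, and the long run of `m d oNNN` commands that follows the `append` is simply the Python side releasing its references to the intermediate JVM objects.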
2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro361 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u org rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u org.apache rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u org.apache.spark rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions.Window rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.expressions.Window (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.expressions.Window orderBy e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ylo362 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o362 add ro361 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro362 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro363 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro363 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro364 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o360 over ro364 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro365 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o352 withColumn srow_index ro365 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer 
received: !yro366 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ylo367 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o367 add srow_index e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro367 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro368 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o359 join ro366 ro368 sinner e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro369 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o369 drop srow_index e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro370 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o370 writeTo stest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yro371 (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: c o371 append e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o215 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o214 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o191 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o255 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o254 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o231 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o287 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o274 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 
18:15:03 [ 670 ] DEBUG : Command to send: m d o281 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o296 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o301 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o309 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o315 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o322 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o327 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o294 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o295 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o297 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o298 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o299 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o300 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o302 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o303 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o304 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o305 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o306 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o308 e (clientserver.py:501, 
send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o310 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o311 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o312 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o313 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o314 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o316 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o317 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o318 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o319 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o320 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o321 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o323 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o324 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o325 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o326 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o328 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o329 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o332 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv 
(clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o333 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o336 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o341 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o349 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o355 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o334 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o335 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o337 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o338 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o339 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o340 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o342 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o343 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o344 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o345 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o346 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o348 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o350 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command 
to send: m d o351 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o353 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o354 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o356 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o357 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o358 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o362 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Command to send: m d o367 e (clientserver.py:501, send_command) 2025-04-04 18:15:03 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] INFO : GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet user:root, principal:None (hdfs_api.py:256, write_file) 2025-04-04 18:15:04 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 
'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:284, write_file) 2025-04-04 18:15:04 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'PAR1\x15\x00\x15\xc0\x0c\x15\xf6\x02\x15\xbd\xf3\xd4\x95\x06\x1c\x15\xc8\x01\x15\x00\x15\x08\x15\x08\x00\x00\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00-\xc5\xd7"\x02\x00\x00\x00\xc0\x90Y![$+\xb3\xb23C\x8a\xec\x8c\xca\x8e\xf2\xff\xff\xe0\xc1\xdd\xcb\x05\x02\xffZ\xdc\xea6\x07\xdd\xee\x0ew\xba\xcb\xdd\xeeq\xc8aG\xdc\xeb>\xf7;\xea\x01\x0fz\xc8\xc3\x1e\xf1\xa8\xc7<\xee\x98\'<\xe9\xb8\xa7\x9c\xf0\xb4g<\xeb9\xcf;\xe9\x05/z\xc9\xcb^\xf1\xaaSN;\xe35\xaf{\xc3\x9b\xde\xf2\xb6w\x9c\xf5\xae\xf7\xbc\xef\x03\x1f\xfa\xc89\x1f\xfb\xc4\xa7\xce\xfb\xcc\x05\x17}\xee\x0b\x97|\xe9+_\xfb\xc6\xb7\xbe\xf3\xbd\xcb~\xf0\xa3\x9f\xfc\xec\x8a\xab\xae\xf9\xc5\xaf~\xf3\xbb?\xfc\xe9/\xd7\xfd\xed\x1f7\xdc\xf4\xaf\xff\x00\x02\xc2\xe7q \x03\x00\x00\x15\x00\x15\xa0\t\x15\xea\x02\x15\x8d\xbc\xbf\xb8\t\x1c\x15\xc8\x01\x15\x00\x15\x08\x15\x08\x00\x00\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00\x1d\xd2\xb1\x81B1\x0cD\xc1\x0f\xd7\x10X\xb2$\xf7\xdf\x18\xe7I^\xb4\x93\xed\xeby\x9e\xef\xeb?\xeb&n\xf2f\xdf\xd4M\xdf\xcc\xcdy\xdf\xf1G\xbf\xba44uki\xeb(\xbb\xd8\xc5.v\xb1\x8b]\xecb\x17\xbb\xd8\xc5\x06\x1bl\xb0\xc1\x06\x1bl\xb0\xc1\x06\x1bl\xb2\xc9&\x9bl\xb2\xc9&\x9bl\xb2\xc9nv\xb3\x9b\xdd\xecf7\xbb\xd9\xcdnv\xb3\xc5\x16[l\xb1\xc5\x16[l\xb1\xc5\x16\xdbl\xb3\xcd6\xdbl\xb3\xcd6\xdbl\xb3\xc3\x0e;\xec\xb0\xc3\x0e;\xec\xb0\xc3\x0e{\xd8\xc3\x1e\xf6\xb0\x87=\xeca\x0f{\xd8s\xfe|\xe3\xf3\x03\xd4\xdb\x86\xadP\x02\x00\x00\x19\x11\x02\x19\x18\x08\x00\x00\x00\x00\x00\x00\x00\x00\x19\x18\x08c\x00\x00\x00\x00\x00\x00\x00\x15\x02\x19\x16\x00\x00\x19\x11\x02\x19\x18\x011\x19\x18\x0299\x15\x02\x19\x16\x00\x00\x19\x1c\x16\x08\x15\xaa\x03\x16\x00\x00\x00\x19\x1c\x16\xb2\x03\x15\x9e\x03\x16\x00\x00\x00\x15\x02\x19\x00&\xb2\x03\x1c\x15\x0c\x19%\x00\x08\x19\x18\x01b\x15\x04\x16\xc8\x01\x16\xd4\t\x16\x9e\x03&\xb2\x03<6\x00(\x0299\x18\x011\x00\x19\x1c\x15\x00\x15\x00\x15\x02\x00\x00\x16\xc8\x07\x15\x18\x16\x8e\x07\x15$\x00\x16\xc8\x16\x16\xc8\x01&\x08\x16\xc8\x06\x14\x00\x00\x19\x1c\x18\x0eiceberg.schema\x18\x90\x01{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":true,"type":"long"},{"id":2,"name":"b","required":true,"type":"string"}]}\x00\x18Jparquet-mr version 1.12.3 (build 
f8dced182c4c1fbdec6ccb3185537b5a01e6ed6b)\x19,\x1c\x00\x00\x1c\x00\x00\x00\xcf\x01\x00\x00PAR1', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fdata%2F00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:309, write_file) 2025-04-04 18:15:04 [ 670 ] INFO : GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet user:root, principal:None (hdfs_api.py:256, write_file) 2025-04-04 18:15:04 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : 
Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:284, write_file) 2025-04-04 18:15:04 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'PAR1\x15\x00\x15\xc0\x0c\x15\xf6\x02\x15\xbd\xf3\xd4\x95\x06\x1c\x15\xc8\x01\x15\x00\x15\x08\x15\x08\x00\x00\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00-\xc5\xd7"\x02\x00\x00\x00\xc0\x90Y![$+\xb3\xb23C\x8a\xec\x8c\xca\x8e\xf2\xff\xff\xe0\xc1\xdd\xcb\x05\x02\xffZ\xdc\xea6\x07\xdd\xee\x0ew\xba\xcb\xdd\xeeq\xc8aG\xdc\xeb>\xf7;\xea\x01\x0fz\xc8\xc3\x1e\xf1\xa8\xc7<\xee\x98\'<\xe9\xb8\xa7\x9c\xf0\xb4g<\xeb9\xcf;\xe9\x05/z\xc9\xcb^\xf1\xaaSN;\xe35\xaf{\xc3\x9b\xde\xf2\xb6w\x9c\xf5\xae\xf7\xbc\xef\x03\x1f\xfa\xc89\x1f\xfb\xc4\xa7\xce\xfb\xcc\x05\x17}\xee\x0b\x97|\xe9+_\xfb\xc6\xb7\xbe\xf3\xbd\xcb~\xf0\xa3\x9f\xfc\xec\x8a\xab\xae\xf9\xc5\xaf~\xf3\xbb?\xfc\xe9/\xd7\xfd\xed\x1f7\xdc\xf4\xaf\xff\x00\x02\xc2\xe7q 
\x03\x00\x00\x15\x00\x15\xa0\t\x15\xea\x02\x15\x8d\xbc\xbf\xb8\t\x1c\x15\xc8\x01\x15\x00\x15\x08\x15\x08\x00\x00\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00\x1d\xd2\xb1\x81B1\x0cD\xc1\x0f\xd7\x10X\xb2$\xf7\xdf\x18\xe7I^\xb4\x93\xed\xeby\x9e\xef\xeb?\xeb&n\xf2f\xdf\xd4M\xdf\xcc\xcdy\xdf\xf1G\xbf\xba44uki\xeb(\xbb\xd8\xc5.v\xb1\x8b]\xecb\x17\xbb\xd8\xc5\x06\x1bl\xb0\xc1\x06\x1bl\xb0\xc1\x06\x1bl\xb2\xc9&\x9bl\xb2\xc9&\x9bl\xb2\xc9nv\xb3\x9b\xdd\xecf7\xbb\xd9\xcdnv\xb3\xc5\x16[l\xb1\xc5\x16[l\xb1\xc5\x16\xdbl\xb3\xcd6\xdbl\xb3\xcd6\xdbl\xb3\xc3\x0e;\xec\xb0\xc3\x0e;\xec\xb0\xc3\x0e{\xd8\xc3\x1e\xf6\xb0\x87=\xeca\x0f{\xd8s\xfe|\xe3\xf3\x03\xd4\xdb\x86\xadP\x02\x00\x00\x19\x11\x02\x19\x18\x08\x00\x00\x00\x00\x00\x00\x00\x00\x19\x18\x08c\x00\x00\x00\x00\x00\x00\x00\x15\x02\x19\x16\x00\x00\x19\x11\x02\x19\x18\x011\x19\x18\x0299\x15\x02\x19\x16\x00\x00\x19\x1c\x16\x08\x15\xaa\x03\x16\x00\x00\x00\x19\x1c\x16\xb2\x03\x15\x9e\x03\x16\x00\x00\x00\x15\x02\x19\x00&\xb2\x03\x1c\x15\x0c\x19%\x00\x08\x19\x18\x01b\x15\x04\x16\xc8\x01\x16\xd4\t\x16\x9e\x03&\xb2\x03<6\x00(\x0299\x18\x011\x00\x19\x1c\x15\x00\x15\x00\x15\x02\x00\x00\x16\xc8\x07\x15\x18\x16\x8e\x07\x15$\x00\x16\xc8\x16\x16\xc8\x01&\x08\x16\xc8\x06\x14\x00\x00\x19\x1c\x18\x0eiceberg.schema\x18\x90\x01{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":true,"type":"long"},{"id":2,"name":"b","required":true,"type":"string"}]}\x00\x18Jparquet-mr version 1.12.3 (build f8dced182c4c1fbdec6ccb3185537b5a01e6ed6b)\x19,\x1c\x00\x00\x1c\x00\x00\x00\xcf\x01\x00\x00PAR1', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fdata%2F00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 
'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:309, write_file) 2025-04-04 18:15:04 [ 670 ] INFO : GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro user:root, principal:None (hdfs_api.py:256, write_file) 2025-04-04 18:15:04 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:284, write_file) 2025-04-04 18:15:04 [ 670 ] DEBUG : CALL: {'url': 
'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'Obj\x01\x0c\x16avro.schema\xfa\x1b{"type":"record","name":"manifest_file","fields":[{"name":"manifest_path","type":"string","doc":"Location URI with FS scheme","field-id":500},{"name":"manifest_length","type":"long","doc":"Total file size in bytes","field-id":501},{"name":"partition_spec_id","type":"int","doc":"Spec ID used to write","field-id":502},{"name":"added_snapshot_id","type":["null","long"],"doc":"Snapshot ID that added the manifest","default":null,"field-id":503},{"name":"added_data_files_count","type":["null","int"],"doc":"Added entry count","default":null,"field-id":504},{"name":"existing_data_files_count","type":["null","int"],"doc":"Existing entry count","default":null,"field-id":505},{"name":"deleted_data_files_count","type":["null","int"],"doc":"Deleted entry count","default":null,"field-id":506},{"name":"partitions","type":["null",{"type":"array","items":{"type":"record","name":"r508","fields":[{"name":"contains_null","type":"boolean","doc":"True if any file has a null partition value","field-id":509},{"name":"contains_nan","type":["null","boolean"],"doc":"True if any file has a nan partition value","default":null,"field-id":518},{"name":"lower_bound","type":["null","bytes"],"doc":"Partition lower bound for all files","default":null,"field-id":510},{"name":"upper_bound","type":["null","bytes"],"doc":"Partition upper bound for all files","default":null,"field-id":511}]},"element-id":508}],"doc":"Summary for each partition","default":null,"field-id":507},{"name":"added_rows_count","type":["null","long"],"doc":"Added rows count","default":null,"field-id":512},{"name":"existing_rows_count","type":["null","long"],"doc":"Existing rows count","default":null,"field-id":513},{"name":"deleted_rows_count","type":["null","long"],"doc":"Deleted rows count","default":null,"field-id":514}]}\x14avro.codec\x0edeflate\x16snapshot-id&8276787480606260770\x1cformat-version\x021\x1ciceberg.schema\xb4\x1a{"type":"struct","schema-id":0,"fields":[{"id":500,"name":"manifest_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":501,"name":"manifest_length","required":true,"type":"long","doc":"Total file size in bytes"},{"id":502,"name":"partition_spec_id","required":true,"type":"int","doc":"Spec ID used to write"},{"id":503,"name":"added_snapshot_id","required":false,"type":"long","doc":"Snapshot ID that added the manifest"},{"id":504,"name":"added_data_files_count","required":false,"type":"int","doc":"Added entry count"},{"id":505,"name":"existing_data_files_count","required":false,"type":"int","doc":"Existing entry count"},{"id":506,"name":"deleted_data_files_count","required":false,"type":"int","doc":"Deleted entry count"},{"id":507,"name":"partitions","required":false,"type":{"type":"list","element-id":508,"element":{"type":"struct","fields":[{"id":509,"name":"contains_null","required":true,"type":"boolean","doc":"True if any file has a null partition value"},{"id":518,"name":"contains_nan","required":false,"type":"boolean","doc":"True if any file has a nan partition value"},{"id":510,"name":"lower_bound","required":false,"type":"binary","doc":"Partition lower bound for all files"},{"id":511,"name":"upper_bound","required":false,"type":"binary","doc":"Partition upper bound for all 
files"}]},"element-required":true},"doc":"Summary for each partition"},{"id":512,"name":"added_rows_count","required":false,"type":"long","doc":"Added rows count"},{"id":513,"name":"existing_rows_count","required":false,"type":"long","doc":"Existing rows count"},{"id":514,"name":"deleted_rows_count","required":false,"type":"long","doc":"Deleted rows count"}]}$parent-snapshot-id\x08null\x00\xa0C\x02\xe4a\xaf\xab\x01I\x89\x17G\xf3\xb7\x9br\x02\xac\x025\x8c\xbb\r\xc20\x14\x00\x13\xefc\xec\xc4\xcf\xb1\xbd\n\xcd\x93?\xcf\x80\x94\x08)qX\x81\x92\x12\x06\xa2`\tJZj\x1a\x10"\x12\xba\xe2\x8a\x93\xee\xc2\xc4.R\xa0q\x83\xc9\x17/\x12e?\xf7E\x14\x9a\n\xfeK\xec\xe7\xa9\xd0\x88\rnS\x9e\xd0\x85\x98[k:l\xacV\x08&)\xb4\xad\xd1\xa8#H\x90\xde4\xa9\xb5b\xa0\xe2\x97\xa3\xd2\x8e\x9c\x02\xcbM\xce\x86\x83\x03\xc3-X\xe2\x94d\xea\x88BP\x1d\xf0A\xae\xfca\xdc\x7f\xd6\x15\xbb\xbe\xde\xcf\xd3\xf9x\x7f\xd4\x8c\xb1j\xe1V\xff\xf4\x05\xa0C\x02\xe4a\xaf\xab\x01I\x89\x17G\xf3\xb7\x9br', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fmetadata%2Fsnap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:309, write_file) 2025-04-04 18:15:04 [ 670 ] INFO : GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50070 "GET 
/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json user:root, principal:None (hdfs_api.py:256, write_file) 2025-04-04 18:15:04 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:284, write_file) 2025-04-04 18:15:04 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'{\n "format-version" : 1,\n "table-uuid" : "762d77fc-31c8-4b8f-a430-fe8ce8ac91f5",\n "location" : "/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28",\n "last-updated-ms" : 1743790504083,\n "last-column-id" : 2,\n "schema" : {\n "type" : "struct",\n "schema-id" : 0,\n "fields" : [ {\n "id" : 1,\n "name" : "a",\n "required" : false,\n "type" : "long"\n }, {\n "id" : 2,\n "name" : "b",\n "required" : false,\n "type" : "string"\n } ]\n },\n "current-schema-id" : 0,\n "schemas" : [ {\n "type" : "struct",\n "schema-id" : 0,\n "fields" : [ {\n "id" : 1,\n "name" : "a",\n "required" : false,\n "type" : "long"\n }, {\n "id" : 2,\n "name" : "b",\n "required" : false,\n "type" : "string"\n } ]\n } ],\n "partition-spec" : [ ],\n "default-spec-id" : 0,\n "partition-specs" : [ {\n "spec-id" : 0,\n "fields" : [ ]\n } ],\n 
"last-partition-id" : 999,\n "default-sort-order-id" : 0,\n "sort-orders" : [ {\n "order-id" : 0,\n "fields" : [ ]\n } ],\n "properties" : {\n "owner" : "root"\n },\n "current-snapshot-id" : 1118366645057585943,\n "refs" : {\n "main" : {\n "snapshot-id" : 1118366645057585943,\n "type" : "branch"\n }\n },\n "snapshots" : [ {\n "snapshot-id" : 8276787480606260770,\n "timestamp-ms" : 1743790503606,\n "summary" : {\n "operation" : "append",\n "spark.app.id" : "local-1743790492634",\n "added-data-files" : "1",\n "added-records" : "100",\n "added-files-size" : "967",\n "changed-partition-count" : "1",\n "total-records" : "100",\n "total-files-size" : "967",\n "total-data-files" : "1",\n "total-delete-files" : "0",\n "total-position-deletes" : "0",\n "total-equality-deletes" : "0"\n },\n "manifest-list" : "/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro",\n "schema-id" : 0\n }, {\n "snapshot-id" : 1118366645057585943,\n "parent-snapshot-id" : 8276787480606260770,\n "timestamp-ms" : 1743790504083,\n "summary" : {\n "operation" : "append",\n "spark.app.id" : "local-1743790492634",\n "added-data-files" : "1",\n "added-records" : "100",\n "added-files-size" : "967",\n "changed-partition-count" : "1",\n "total-records" : "200",\n "total-files-size" : "1934",\n "total-data-files" : "2",\n "total-delete-files" : "0",\n "total-position-deletes" : "0",\n "total-equality-deletes" : "0"\n },\n "manifest-list" : "/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro",\n "schema-id" : 0\n } ],\n "statistics" : [ ],\n "snapshot-log" : [ {\n "timestamp-ms" : 1743790503606,\n "snapshot-id" : 8276787480606260770\n }, {\n "timestamp-ms" : 1743790504083,\n "snapshot-id" : 1118366645057585943\n } ],\n "metadata-log" : [ {\n "timestamp-ms" : 1743790503606,\n "metadata-file" : "/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json"\n } ]\n}', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fmetadata%2Fv2.metadata.json&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 
2025-04-04 18:15:04 [ 670 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:309, write_file) 2025-04-04 18:15:04 [ 670 ] INFO : GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro user:root, principal:None (hdfs_api.py:256, write_file) 2025-04-04 18:15:04 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 
'Jetty(6.1.26)'} (hdfs_api.py:284, write_file) 2025-04-04 18:15:04 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'Obj\x01\x0c\x16avro.schema\xfa\x1b{"type":"record","name":"manifest_file","fields":[{"name":"manifest_path","type":"string","doc":"Location URI with FS scheme","field-id":500},{"name":"manifest_length","type":"long","doc":"Total file size in bytes","field-id":501},{"name":"partition_spec_id","type":"int","doc":"Spec ID used to write","field-id":502},{"name":"added_snapshot_id","type":["null","long"],"doc":"Snapshot ID that added the manifest","default":null,"field-id":503},{"name":"added_data_files_count","type":["null","int"],"doc":"Added entry count","default":null,"field-id":504},{"name":"existing_data_files_count","type":["null","int"],"doc":"Existing entry count","default":null,"field-id":505},{"name":"deleted_data_files_count","type":["null","int"],"doc":"Deleted entry count","default":null,"field-id":506},{"name":"partitions","type":["null",{"type":"array","items":{"type":"record","name":"r508","fields":[{"name":"contains_null","type":"boolean","doc":"True if any file has a null partition value","field-id":509},{"name":"contains_nan","type":["null","boolean"],"doc":"True if any file has a nan partition value","default":null,"field-id":518},{"name":"lower_bound","type":["null","bytes"],"doc":"Partition lower bound for all files","default":null,"field-id":510},{"name":"upper_bound","type":["null","bytes"],"doc":"Partition upper bound for all files","default":null,"field-id":511}]},"element-id":508}],"doc":"Summary for each partition","default":null,"field-id":507},{"name":"added_rows_count","type":["null","long"],"doc":"Added rows count","default":null,"field-id":512},{"name":"existing_rows_count","type":["null","long"],"doc":"Existing rows count","default":null,"field-id":513},{"name":"deleted_rows_count","type":["null","long"],"doc":"Deleted rows count","default":null,"field-id":514}]}\x14avro.codec\x0edeflate\x16snapshot-id&1118366645057585943\x1cformat-version\x021\x1ciceberg.schema\xb4\x1a{"type":"struct","schema-id":0,"fields":[{"id":500,"name":"manifest_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":501,"name":"manifest_length","required":true,"type":"long","doc":"Total file size in bytes"},{"id":502,"name":"partition_spec_id","required":true,"type":"int","doc":"Spec ID used to write"},{"id":503,"name":"added_snapshot_id","required":false,"type":"long","doc":"Snapshot ID that added the manifest"},{"id":504,"name":"added_data_files_count","required":false,"type":"int","doc":"Added entry count"},{"id":505,"name":"existing_data_files_count","required":false,"type":"int","doc":"Existing entry count"},{"id":506,"name":"deleted_data_files_count","required":false,"type":"int","doc":"Deleted entry count"},{"id":507,"name":"partitions","required":false,"type":{"type":"list","element-id":508,"element":{"type":"struct","fields":[{"id":509,"name":"contains_null","required":true,"type":"boolean","doc":"True if any file has a null partition value"},{"id":518,"name":"contains_nan","required":false,"type":"boolean","doc":"True if any file has a nan partition value"},{"id":510,"name":"lower_bound","required":false,"type":"binary","doc":"Partition lower bound for all 
files"},{"id":511,"name":"upper_bound","required":false,"type":"binary","doc":"Partition upper bound for all files"}]},"element-required":true},"doc":"Summary for each partition"},{"id":512,"name":"added_rows_count","required":false,"type":"long","doc":"Added rows count"},{"id":513,"name":"existing_rows_count","required":false,"type":"long","doc":"Existing rows count"},{"id":514,"name":"deleted_rows_count","required":false,"type":"long","doc":"Deleted rows count"}]}$parent-snapshot-id&8276787480606260770\x00Vm\x1eI\xe9~m\xf5#\xf2\xb6\x18h\x03\x8f\x9c\x04\xa0\x03\xb5\xce=JCA\x14\x05\xe0d6\xe2\n&o~\xee\xbc\xb9\xb3\x954\xc3\xfc\xdck\x84\x04\xe1\xbd\x89;\x904\x82\xa5\xa9,\\@\xfa\xf4\x16n\xc2\xd2N\xac,DP\xd4@\x1a\xfbp\x8aS\x1c8|[\xd1]\x14\xca4\x9c\xc7\x9aZ\xea*qZ/[\xd7hl\xf1\xb8\x94\xe5zl4D\x1d\x17\x95\xc7\x18ra\x83\xbe\x8f\x1a\x9d\x8d\xe0\xab\x8dh\xbc\x8b\xae\x80\x02\x95\xbc\xae\x06\xbb\x15\xb5txt^1@0\xd2\x02\xb3\x84\xdeV\x99\xb9h\xa9\xc1\x14\x8d\x065\xda$Wj\x96\xae\x86\xcb\xcf\xf9D\xecn^\xdf\xf7\x0f\xf7\xd7gB\x88\xc9!O\xd3\xbf\xda\x9e\\j]\xa0`\x01\xa5g\xf6\x12\x02x\x89\x80$\xa9\xaa\xda\x13\xe5l{8J\xbf\x7f\xa5\x8f\x1f_o\xb7w\x9b\xe7\x97\xe9?\xea\x0fVm\x1eI\xe9~m\xf5#\xf2\xb6\x18h\x03\x8f\x9c', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fmetadata%2Fsnap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:309, write_file) 2025-04-04 18:15:04 [ 670 ] INFO : GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata 
user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro user:root, principal:None (hdfs_api.py:256, write_file) 2025-04-04 18:15:04 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:284, write_file) 2025-04-04 18:15:04 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': 
b'Obj\x01\x0e\x0cschema\xa4\x02{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":false,"type":"long"},{"id":2,"name":"b","required":false,"type":"string"}]}\x16avro.schema\x96.{"type":"record","name":"manifest_entry","fields":[{"name":"status","type":"int","field-id":0},{"name":"snapshot_id","type":["null","long"],"default":null,"field-id":1},{"name":"data_file","type":{"type":"record","name":"r2","fields":[{"name":"file_path","type":"string","doc":"Location URI with FS scheme","field-id":100},{"name":"file_format","type":"string","doc":"File format name: avro, orc, or parquet","field-id":101},{"name":"partition","type":{"type":"record","name":"r102","fields":[]},"field-id":102},{"name":"record_count","type":"long","doc":"Number of records in the file","field-id":103},{"name":"file_size_in_bytes","type":"long","doc":"Total file size in bytes","field-id":104},{"name":"block_size_in_bytes","type":"long","field-id":105},{"name":"column_sizes","type":["null",{"type":"array","items":{"type":"record","name":"k117_v118","fields":[{"name":"key","type":"int","field-id":117},{"name":"value","type":"long","field-id":118}]},"logicalType":"map"}],"doc":"Map of column id to total size on disk","default":null,"field-id":108},{"name":"value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k119_v120","fields":[{"name":"key","type":"int","field-id":119},{"name":"value","type":"long","field-id":120}]},"logicalType":"map"}],"doc":"Map of column id to total count, including null and NaN","default":null,"field-id":109},{"name":"null_value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k121_v122","fields":[{"name":"key","type":"int","field-id":121},{"name":"value","type":"long","field-id":122}]},"logicalType":"map"}],"doc":"Map of column id to null value count","default":null,"field-id":110},{"name":"nan_value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k138_v139","fields":[{"name":"key","type":"int","field-id":138},{"name":"value","type":"long","field-id":139}]},"logicalType":"map"}],"doc":"Map of column id to number of NaN values in the column","default":null,"field-id":137},{"name":"lower_bounds","type":["null",{"type":"array","items":{"type":"record","name":"k126_v127","fields":[{"name":"key","type":"int","field-id":126},{"name":"value","type":"bytes","field-id":127}]},"logicalType":"map"}],"doc":"Map of column id to lower bound","default":null,"field-id":125},{"name":"upper_bounds","type":["null",{"type":"array","items":{"type":"record","name":"k129_v130","fields":[{"name":"key","type":"int","field-id":129},{"name":"value","type":"bytes","field-id":130}]},"logicalType":"map"}],"doc":"Map of column id to upper bound","default":null,"field-id":128},{"name":"key_metadata","type":["null","bytes"],"doc":"Encryption key metadata blob","default":null,"field-id":131},{"name":"split_offsets","type":["null",{"type":"array","items":"long","element-id":133}],"doc":"Splittable offsets","default":null,"field-id":132},{"name":"sort_order_id","type":["null","int"],"doc":"Sort order 
ID","default":null,"field-id":140}]},"field-id":2}]}\x14avro.codec\x0edeflate\x1cformat-version\x021"partition-spec-id\x020\x1ciceberg.schema\xea${"type":"struct","schema-id":0,"fields":[{"id":0,"name":"status","required":true,"type":"int"},{"id":1,"name":"snapshot_id","required":false,"type":"long"},{"id":2,"name":"data_file","required":true,"type":{"type":"struct","fields":[{"id":100,"name":"file_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":101,"name":"file_format","required":true,"type":"string","doc":"File format name: avro, orc, or parquet"},{"id":102,"name":"partition","required":true,"type":{"type":"struct","fields":[]}},{"id":103,"name":"record_count","required":true,"type":"long","doc":"Number of records in the file"},{"id":104,"name":"file_size_in_bytes","required":true,"type":"long","doc":"Total file size in bytes"},{"id":105,"name":"block_size_in_bytes","required":true,"type":"long"},{"id":108,"name":"column_sizes","required":false,"type":{"type":"map","key-id":117,"key":"int","value-id":118,"value":"long","value-required":true},"doc":"Map of column id to total size on disk"},{"id":109,"name":"value_counts","required":false,"type":{"type":"map","key-id":119,"key":"int","value-id":120,"value":"long","value-required":true},"doc":"Map of column id to total count, including null and NaN"},{"id":110,"name":"null_value_counts","required":false,"type":{"type":"map","key-id":121,"key":"int","value-id":122,"value":"long","value-required":true},"doc":"Map of column id to null value count"},{"id":137,"name":"nan_value_counts","required":false,"type":{"type":"map","key-id":138,"key":"int","value-id":139,"value":"long","value-required":true},"doc":"Map of column id to number of NaN values in the column"},{"id":125,"name":"lower_bounds","required":false,"type":{"type":"map","key-id":126,"key":"int","value-id":127,"value":"binary","value-required":true},"doc":"Map of column id to lower bound"},{"id":128,"name":"upper_bounds","required":false,"type":{"type":"map","key-id":129,"key":"int","value-id":130,"value":"binary","value-required":true},"doc":"Map of column id to upper bound"},{"id":131,"name":"key_metadata","required":false,"type":"binary","doc":"Encryption key metadata blob"},{"id":132,"name":"split_offsets","required":false,"type":{"type":"list","element-id":133,"element":"long","element-required":true},"doc":"Splittable offsets"},{"id":140,"name":"sort_order_id","required":false,"type":"int","doc":"Sort order ID"}]}}]}\x1cpartition-spec\x04[]\x00N\x9b\x96R\xa0\x1dfv\x80\x8d\xbdgE\xa4)\x83\x02\xa0\x035\x8c;R\xc30\x14EeE\x05\x15\xc9\x0e\xb2\x02a\xebg=uP\xd0\x03\x03\xb5F\xd6\'a&\x05\xb1\xe5\xde\r\x1d\xc3\x12RPR\xa4g\x07^\x0c%\x0b\xc0\xce$\xb7:\xf7\xbe3\x0f\xe3\xe3\xc7\xef\xdf\xcf\xd7\xe1}}\xc4\xe5\xab\x8fMl76\xb8\xec\xca\x10\x93\xebw\xb9\xcc\xb1\xcb\xf6r\xf1\xbb\xbe\xcb\xb1\xb5\xccnC\xea\xaci|\xe2\xa0k\xcb@\t+u\x10\x16\xb8VVyY\xc9\xcai\x168\x94\xa7o\xd5\x1c\xca\x14\xd5\xdcq#\x05P\x01\x82Qi\xa0\xa6\x10\x1aM!\xd5N\xebT\x81\x9a\x96Yf7o\xae\xdd\xf71_?\xdc==\xbe\xdc?\x8f\xc5\xe7r\x18\x86[L\xf0\xf7\x82\x1c\x16h\x82\xb1 c1\x03"\x08\xe1\x19V\xe8\x1c\x82\xd9\xa9\xfbK\'\xc6L\x0e\xbe\x9a\xbc\x7fN\x9b\x96R\xa0\x1dfv\x80\x8d\xbdgE\xa4)\x83', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, 
req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fmetadata%2F570f4492-34ff-463d-bfc1-142c1828183a-m0.avro&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:309, write_file) 2025-04-04 18:15:04 [ 670 ] INFO : GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text user:root, principal:None (hdfs_api.py:256, write_file) 2025-04-04 18:15:04 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 
18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:284, write_file) 2025-04-04 18:15:04 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'2', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fmetadata%2Fversion-hint.text&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:309, write_file) 2025-04-04 18:15:04 [ 670 ] INFO : GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, 
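Each write_file above is the standard WebHDFS two-step CREATE: a PUT against the namenode (172.16.2.2:50070) with redirects disabled, a 307 whose Location header names a datanode (hdfs1:50075), then a second PUT of the payload there, acknowledged with 201 Created. A minimal requests-based sketch of that handshake, mirroring the URLs and params in the CALL dicts above — an illustrative reconstruction, not the actual hdfs_api.py code:

    import requests

    def webhdfs_create(namenode, path, data, user="root"):
        # Step 1: ask the namenode where to write; it answers 307 + Location
        # (op=CREATE&overwrite=true, exactly as in the CALL entries above).
        r = requests.put(
            f"http://{namenode}/webhdfs/v1{path}",
            params={"op": "CREATE", "overwrite": "true", "user.name": user},
            allow_redirects=False,  # follow the redirect by hand, as the log does
        )
        assert r.status_code == 307
        # Step 2: send the bytes to the datanode named in Location; 201 = created.
        r = requests.put(r.headers["Location"], data=data)
        assert r.status_code == 201

    # e.g. the version hint written above ('data': b'2'); path shortened here
    webhdfs_create("172.16.2.2:50070",
                   "/iceberg_data/default/.../metadata/version-hint.text", b"2")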
_new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro user:root, principal:None (hdfs_api.py:256, write_file) 2025-04-04 18:15:04 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:284, write_file) 2025-04-04 18:15:04 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'Obj\x01\x0e\x0cschema\xa4\x02{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":false,"type":"long"},{"id":2,"name":"b","required":false,"type":"string"}]}\x16avro.schema\x96.{"type":"record","name":"manifest_entry","fields":[{"name":"status","type":"int","field-id":0},{"name":"snapshot_id","type":["null","long"],"default":null,"field-id":1},{"name":"data_file","type":{"type":"record","name":"r2","fields":[{"name":"file_path","type":"string","doc":"Location URI with FS scheme","field-id":100},{"name":"file_format","type":"string","doc":"File format name: avro, orc, or 
parquet","field-id":101},{"name":"partition","type":{"type":"record","name":"r102","fields":[]},"field-id":102},{"name":"record_count","type":"long","doc":"Number of records in the file","field-id":103},{"name":"file_size_in_bytes","type":"long","doc":"Total file size in bytes","field-id":104},{"name":"block_size_in_bytes","type":"long","field-id":105},{"name":"column_sizes","type":["null",{"type":"array","items":{"type":"record","name":"k117_v118","fields":[{"name":"key","type":"int","field-id":117},{"name":"value","type":"long","field-id":118}]},"logicalType":"map"}],"doc":"Map of column id to total size on disk","default":null,"field-id":108},{"name":"value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k119_v120","fields":[{"name":"key","type":"int","field-id":119},{"name":"value","type":"long","field-id":120}]},"logicalType":"map"}],"doc":"Map of column id to total count, including null and NaN","default":null,"field-id":109},{"name":"null_value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k121_v122","fields":[{"name":"key","type":"int","field-id":121},{"name":"value","type":"long","field-id":122}]},"logicalType":"map"}],"doc":"Map of column id to null value count","default":null,"field-id":110},{"name":"nan_value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k138_v139","fields":[{"name":"key","type":"int","field-id":138},{"name":"value","type":"long","field-id":139}]},"logicalType":"map"}],"doc":"Map of column id to number of NaN values in the column","default":null,"field-id":137},{"name":"lower_bounds","type":["null",{"type":"array","items":{"type":"record","name":"k126_v127","fields":[{"name":"key","type":"int","field-id":126},{"name":"value","type":"bytes","field-id":127}]},"logicalType":"map"}],"doc":"Map of column id to lower bound","default":null,"field-id":125},{"name":"upper_bounds","type":["null",{"type":"array","items":{"type":"record","name":"k129_v130","fields":[{"name":"key","type":"int","field-id":129},{"name":"value","type":"bytes","field-id":130}]},"logicalType":"map"}],"doc":"Map of column id to upper bound","default":null,"field-id":128},{"name":"key_metadata","type":["null","bytes"],"doc":"Encryption key metadata blob","default":null,"field-id":131},{"name":"split_offsets","type":["null",{"type":"array","items":"long","element-id":133}],"doc":"Splittable offsets","default":null,"field-id":132},{"name":"sort_order_id","type":["null","int"],"doc":"Sort order ID","default":null,"field-id":140}]},"field-id":2}]}\x14avro.codec\x0edeflate\x1cformat-version\x021"partition-spec-id\x020\x1ciceberg.schema\xea${"type":"struct","schema-id":0,"fields":[{"id":0,"name":"status","required":true,"type":"int"},{"id":1,"name":"snapshot_id","required":false,"type":"long"},{"id":2,"name":"data_file","required":true,"type":{"type":"struct","fields":[{"id":100,"name":"file_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":101,"name":"file_format","required":true,"type":"string","doc":"File format name: avro, orc, or parquet"},{"id":102,"name":"partition","required":true,"type":{"type":"struct","fields":[]}},{"id":103,"name":"record_count","required":true,"type":"long","doc":"Number of records in the file"},{"id":104,"name":"file_size_in_bytes","required":true,"type":"long","doc":"Total file size in 
bytes"},{"id":105,"name":"block_size_in_bytes","required":true,"type":"long"},{"id":108,"name":"column_sizes","required":false,"type":{"type":"map","key-id":117,"key":"int","value-id":118,"value":"long","value-required":true},"doc":"Map of column id to total size on disk"},{"id":109,"name":"value_counts","required":false,"type":{"type":"map","key-id":119,"key":"int","value-id":120,"value":"long","value-required":true},"doc":"Map of column id to total count, including null and NaN"},{"id":110,"name":"null_value_counts","required":false,"type":{"type":"map","key-id":121,"key":"int","value-id":122,"value":"long","value-required":true},"doc":"Map of column id to null value count"},{"id":137,"name":"nan_value_counts","required":false,"type":{"type":"map","key-id":138,"key":"int","value-id":139,"value":"long","value-required":true},"doc":"Map of column id to number of NaN values in the column"},{"id":125,"name":"lower_bounds","required":false,"type":{"type":"map","key-id":126,"key":"int","value-id":127,"value":"binary","value-required":true},"doc":"Map of column id to lower bound"},{"id":128,"name":"upper_bounds","required":false,"type":{"type":"map","key-id":129,"key":"int","value-id":130,"value":"binary","value-required":true},"doc":"Map of column id to upper bound"},{"id":131,"name":"key_metadata","required":false,"type":"binary","doc":"Encryption key metadata blob"},{"id":132,"name":"split_offsets","required":false,"type":{"type":"list","element-id":133,"element":"long","element-required":true},"doc":"Splittable offsets"},{"id":140,"name":"sort_order_id","required":false,"type":"int","doc":"Sort order ID"}]}}]}\x1cpartition-spec\x04[]\x00\x14\x89&/\xcb\x13\xaf\xce\xae1c\x15r>\x91\xf7\x02\xa4\x035\x8c\xb1N\xc30\x14E\x1d\xd7\x03\x13\xed\x8f\x98\xd8\x8e\x1d;[\x19\xd8\x01\xc1\xfc\xf4b;-R\x07\x9a8{66\x18\xd9\xf8\x00\x06\xbe\x82!?\xc1\xc8\xca\xcc\x82DR\xb5w:W\xf7\xe8R\xfa\xf9\xfb\xf7\xf3\xf2\xfa\xf4\xf5\x9d}\xd0\xfc\xc1\xc7:\xb6\x1b\x08\x980\x0f\xb1\xc1~\x97\xf2\x14\xbb\x04\xa7\xc5\xef\xfa.\xc5\x16$lC\xd3AU\xfbF9[\x82t\xa6\x00mC\x01NY\x03\xc6k\xa1\x05Z\x19\x94\xcb\x0fob\x0e\x97\x05\xd7\xbeq6\x08\xc5\x8d\xaej\xae\x95B^\x17(x\x94F\x96\xb6D\x1d\x05\xf2Y\x96\x17\x8f\xd8\xee\xfb\x98\xce\xaf/oo\xee\xaf\xee\xc6\xecy9\x0c\xc3\x9a2\xfa\xbe`o\x0b2\xc1\x98\xb11\x9b\x810B\xe8\x0c+r\x0c\xa3\xf2\xd0\xfd\xa9\xb3\xaa\x9a\x1cz6y\xff\x14\x89&/\xcb\x13\xaf\xce\xae1c\x15r>\x91\xf7', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fmetadata%2F359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 
18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:309, write_file) 2025-04-04 18:15:04 [ 670 ] INFO : GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json user:root, principal:None (hdfs_api.py:256, write_file) 2025-04-04 18:15:04 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 
'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:284, write_file) 2025-04-04 18:15:04 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'{\n "format-version" : 1,\n "table-uuid" : "762d77fc-31c8-4b8f-a430-fe8ce8ac91f5",\n "location" : "/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28",\n "last-updated-ms" : 1743790503606,\n "last-column-id" : 2,\n "schema" : {\n "type" : "struct",\n "schema-id" : 0,\n "fields" : [ {\n "id" : 1,\n "name" : "a",\n "required" : false,\n "type" : "long"\n }, {\n "id" : 2,\n "name" : "b",\n "required" : false,\n "type" : "string"\n } ]\n },\n "current-schema-id" : 0,\n "schemas" : [ {\n "type" : "struct",\n "schema-id" : 0,\n "fields" : [ {\n "id" : 1,\n "name" : "a",\n "required" : false,\n "type" : "long"\n }, {\n "id" : 2,\n "name" : "b",\n "required" : false,\n "type" : "string"\n } ]\n } ],\n "partition-spec" : [ ],\n "default-spec-id" : 0,\n "partition-specs" : [ {\n "spec-id" : 0,\n "fields" : [ ]\n } ],\n "last-partition-id" : 999,\n "default-sort-order-id" : 0,\n "sort-orders" : [ {\n "order-id" : 0,\n "fields" : [ ]\n } ],\n "properties" : {\n "owner" : "root"\n },\n "current-snapshot-id" : 8276787480606260770,\n "refs" : {\n "main" : {\n "snapshot-id" : 8276787480606260770,\n "type" : "branch"\n }\n },\n "snapshots" : [ {\n "snapshot-id" : 8276787480606260770,\n "timestamp-ms" : 1743790503606,\n "summary" : {\n "operation" : "append",\n "spark.app.id" : "local-1743790492634",\n "added-data-files" : "1",\n "added-records" : "100",\n "added-files-size" : "967",\n "changed-partition-count" : "1",\n "total-records" : "100",\n "total-files-size" : "967",\n "total-data-files" : "1",\n "total-delete-files" : "0",\n "total-position-deletes" : "0",\n "total-equality-deletes" : "0"\n },\n "manifest-list" : "/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro",\n "schema-id" : 0\n } ],\n "statistics" : [ ],\n "snapshot-log" : [ {\n "timestamp-ms" : 1743790503606,\n "snapshot-id" : 8276787480606260770\n } ],\n "metadata-log" : [ ]\n}', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fmetadata%2Fv1.metadata.json&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 
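The v1.metadata.json payload above is a complete Iceberg v1 table description: schema, partition specs, current-snapshot-id, a refs.main branch pointer, and a snapshots list whose manifest-list field names the snap-*.avro file written earlier. For a version-hint layout like this one, a reader resolves the current table state roughly as follows — a hand-rolled sketch over an assumed read_file(path) -> bytes helper (e.g. a WebHDFS OPEN wrapper), not ClickHouse's actual reader:

    import json

    def current_manifest_list(read_file):
        # version-hint.text holds the latest metadata version (b'2' was written above)
        version = int(read_file("metadata/version-hint.text"))
        meta = json.loads(read_file(f"metadata/v{version}.metadata.json"))
        snap_id = meta["current-snapshot-id"]
        snap = next(s for s in meta["snapshots"] if s["snapshot-id"] == snap_id)
        # e.g. .../metadata/snap-8276787480606260770-1-359e9348-...-m0's list file
        return snap["manifest-list"]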
18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:309, write_file) 2025-04-04 18:15:04 [ 670 ] INFO : Adding another dataframe. result files: ['/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json'] (test.py:645, add_df) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: c o50 sc e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yro372 (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: c o372 defaultParallelism e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yi1 (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: c o61 range i0 i100 i1 i1 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yro373 (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 
18:15:04 [ 670 ] DEBUG : Answer received: !ylo374 (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: c o374 add sa e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro374 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yro375 (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: c o373 toDF ro375 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yro376 (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: c o50 sc e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yro377 (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: c o377 defaultParallelism e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yi1 (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: c o61 range i1 i101 i1 i1 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yro378 (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !ylo379 (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: c o379 add sb e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro379 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yro380 (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: c o378 toDF ro380 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yro381 (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: c o381 apply sb e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yro382 (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: r u SparkSession rj e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] 
DEBUG : Answer received: !yro383 (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: c o383 isDefined e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: r u SparkSession rj e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yro384 (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: c o384 get e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yro385 (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: r u SparkSession$ rj e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession$ (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession$ MODULE$ e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yro386 (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: i java.util.HashMap e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yao387 (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: c o386 applyModifiableSettings ro385 ro387 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: c o61 parseDataType s"string" e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yro388 (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: c o382 cast ro388 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yro389 (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: c o381 withColumn sb ro389 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yro390 (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yro391 (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 
18:15:04 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yro392 (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: r u org rj e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: r u org.apache rj e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: r u org.apache.spark rj e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql rj e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions.Window rj e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.expressions.Window (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.expressions.Window orderBy e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !ylo393 (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: c o393 add ro392 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro393 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yro394 (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro394 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yro395 (clientserver.py:512, send_command) 2025-04-04 
18:15:04 [ 670 ] DEBUG : Command to send: c o391 over ro395 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yro396 (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: c o376 withColumn srow_index ro396 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yro397 (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yro398 (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yro399 (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: r u org rj e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: r u org.apache rj e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: r u org.apache.spark rj e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql rj e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions.Window rj e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.expressions.Window (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.expressions.Window orderBy e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: 
!ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !ylo400 (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: c o400 add ro399 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro400 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yro401 (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro401 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yro402 (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: c o398 over ro402 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yro403 (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: c o390 withColumn srow_index ro403 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yro404 (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !ylo405 (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: c o405 add srow_index e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro405 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yro406 (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: c o397 join ro404 ro406 sinner e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yro407 (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: c o407 drop srow_index e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yro408 (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: c o408 writeTo stest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yro409 (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: c o409 append e 
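The clientserver.py traffic above is py4j's wire protocol between the Python test and the Spark JVM: 'c' calls a method on an object id, 'i' constructs an object, 'r u'/'r m' are reflection lookups, and 'e' terminates a frame; replies start with '!y' (success) plus a type tag ('ro…' object reference, 'i' integer, 'b' boolean, 'v' void). Decoded, the session is ordinary driver-side PySpark — reconstructed roughly from the frames, not the test's verbatim source, and assuming the test's spark session is in scope:

    from pyspark.sql import Window
    from pyspark.sql.functions import monotonically_increasing_id, row_number

    table = "test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28"
    df_a = spark.range(0, 100, 1, 1).toDF("a")             # 'c o61 range i0 i100 i1 i1'
    df_b = spark.range(1, 101, 1, 1).toDF("b")             # 'c o61 range i1 i101 i1 i1'
    df_b = df_b.withColumn("b", df_b["b"].cast("string"))  # parseDataType s"string"
    w = Window.orderBy(monotonically_increasing_id())
    left = df_a.withColumn("row_index", row_number().over(w))    # o397
    right = df_b.withColumn("row_index", row_number().over(w))   # o404
    # 'c o397 join ro404 ro406 sinner' ... 'c o409 append e'
    left.join(right, ["row_index"], "inner").drop("row_index").writeTo(table).append()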
(clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] INFO : GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet user:root, principal:None (hdfs_api.py:256, write_file) 2025-04-04 18:15:04 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:284, write_file) 2025-04-04 18:15:04 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': 
b'PAR1\x15\x00\x15\xc0\x0c\x15\xf6\x02\x15\xbd\xf3\xd4\x95\x06\x1c\x15\xc8\x01\x15\x00\x15\x08\x15\x08\x00\x00\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00-\xc5\xd7"\x02\x00\x00\x00\xc0\x90Y![$+\xb3\xb23C\x8a\xec\x8c\xca\x8e\xf2\xff\xff\xe0\xc1\xdd\xcb\x05\x02\xffZ\xdc\xea6\x07\xdd\xee\x0ew\xba\xcb\xdd\xeeq\xc8aG\xdc\xeb>\xf7;\xea\x01\x0fz\xc8\xc3\x1e\xf1\xa8\xc7<\xee\x98\'<\xe9\xb8\xa7\x9c\xf0\xb4g<\xeb9\xcf;\xe9\x05/z\xc9\xcb^\xf1\xaaSN;\xe35\xaf{\xc3\x9b\xde\xf2\xb6w\x9c\xf5\xae\xf7\xbc\xef\x03\x1f\xfa\xc89\x1f\xfb\xc4\xa7\xce\xfb\xcc\x05\x17}\xee\x0b\x97|\xe9+_\xfb\xc6\xb7\xbe\xf3\xbd\xcb~\xf0\xa3\x9f\xfc\xec\x8a\xab\xae\xf9\xc5\xaf~\xf3\xbb?\xfc\xe9/\xd7\xfd\xed\x1f7\xdc\xf4\xaf\xff\x00\x02\xc2\xe7q \x03\x00\x00\x15\x00\x15\xa0\t\x15\xea\x02\x15\x8d\xbc\xbf\xb8\t\x1c\x15\xc8\x01\x15\x00\x15\x08\x15\x08\x00\x00\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00\x1d\xd2\xb1\x81B1\x0cD\xc1\x0f\xd7\x10X\xb2$\xf7\xdf\x18\xe7I^\xb4\x93\xed\xeby\x9e\xef\xeb?\xeb&n\xf2f\xdf\xd4M\xdf\xcc\xcdy\xdf\xf1G\xbf\xba44uki\xeb(\xbb\xd8\xc5.v\xb1\x8b]\xecb\x17\xbb\xd8\xc5\x06\x1bl\xb0\xc1\x06\x1bl\xb0\xc1\x06\x1bl\xb2\xc9&\x9bl\xb2\xc9&\x9bl\xb2\xc9nv\xb3\x9b\xdd\xecf7\xbb\xd9\xcdnv\xb3\xc5\x16[l\xb1\xc5\x16[l\xb1\xc5\x16\xdbl\xb3\xcd6\xdbl\xb3\xcd6\xdbl\xb3\xc3\x0e;\xec\xb0\xc3\x0e;\xec\xb0\xc3\x0e{\xd8\xc3\x1e\xf6\xb0\x87=\xeca\x0f{\xd8s\xfe|\xe3\xf3\x03\xd4\xdb\x86\xadP\x02\x00\x00\x19\x11\x02\x19\x18\x08\x00\x00\x00\x00\x00\x00\x00\x00\x19\x18\x08c\x00\x00\x00\x00\x00\x00\x00\x15\x02\x19\x16\x00\x00\x19\x11\x02\x19\x18\x011\x19\x18\x0299\x15\x02\x19\x16\x00\x00\x19\x1c\x16\x08\x15\xaa\x03\x16\x00\x00\x00\x19\x1c\x16\xb2\x03\x15\x9e\x03\x16\x00\x00\x00\x15\x02\x19\x00&\xb2\x03\x1c\x15\x0c\x19%\x00\x08\x19\x18\x01b\x15\x04\x16\xc8\x01\x16\xd4\t\x16\x9e\x03&\xb2\x03<6\x00(\x0299\x18\x011\x00\x19\x1c\x15\x00\x15\x00\x15\x02\x00\x00\x16\xc8\x07\x15\x18\x16\x8e\x07\x15$\x00\x16\xc8\x16\x16\xc8\x01&\x08\x16\xc8\x06\x14\x00\x00\x19\x1c\x18\x0eiceberg.schema\x18\x90\x01{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":true,"type":"long"},{"id":2,"name":"b","required":true,"type":"string"}]}\x00\x18Jparquet-mr version 1.12.3 (build f8dced182c4c1fbdec6ccb3185537b5a01e6ed6b)\x19,\x1c\x00\x00\x1c\x00\x00\x00\xcf\x01\x00\x00PAR1', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fdata%2F00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 
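The PAR1 payload above carries the table schema inside the Parquet footer key/value metadata: the readable tail of the blob shows the key iceberg.schema with the struct JSON, plus the writer tag parquet-mr version 1.12.3. A pyarrow sketch for recovering both from such a file — illustrative only:

    import json
    import pyarrow.parquet as pq

    md = pq.ParquetFile(
        "00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet"
    ).metadata
    print(md.created_by)                                 # 'parquet-mr version 1.12.3 ...'
    schema = json.loads(md.metadata[b"iceberg.schema"])  # footer key/value pairs (bytes)
    print([f["name"] for f in schema["fields"]])         # ['a', 'b']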
'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:309, write_file) 2025-04-04 18:15:04 [ 670 ] INFO : GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet user:root, principal:None (hdfs_api.py:256, write_file) 2025-04-04 18:15:04 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 
'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:284, write_file) 2025-04-04 18:15:04 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'PAR1\x15\x00\x15\xc0\x0c\x15\xf6\x02\x15\xbd\xf3\xd4\x95\x06\x1c\x15\xc8\x01\x15\x00\x15\x08\x15\x08\x00\x00\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00-\xc5\xd7"\x02\x00\x00\x00\xc0\x90Y![$+\xb3\xb23C\x8a\xec\x8c\xca\x8e\xf2\xff\xff\xe0\xc1\xdd\xcb\x05\x02\xffZ\xdc\xea6\x07\xdd\xee\x0ew\xba\xcb\xdd\xeeq\xc8aG\xdc\xeb>\xf7;\xea\x01\x0fz\xc8\xc3\x1e\xf1\xa8\xc7<\xee\x98\'<\xe9\xb8\xa7\x9c\xf0\xb4g<\xeb9\xcf;\xe9\x05/z\xc9\xcb^\xf1\xaaSN;\xe35\xaf{\xc3\x9b\xde\xf2\xb6w\x9c\xf5\xae\xf7\xbc\xef\x03\x1f\xfa\xc89\x1f\xfb\xc4\xa7\xce\xfb\xcc\x05\x17}\xee\x0b\x97|\xe9+_\xfb\xc6\xb7\xbe\xf3\xbd\xcb~\xf0\xa3\x9f\xfc\xec\x8a\xab\xae\xf9\xc5\xaf~\xf3\xbb?\xfc\xe9/\xd7\xfd\xed\x1f7\xdc\xf4\xaf\xff\x00\x02\xc2\xe7q \x03\x00\x00\x15\x00\x15\xa0\t\x15\xea\x02\x15\x8d\xbc\xbf\xb8\t\x1c\x15\xc8\x01\x15\x00\x15\x08\x15\x08\x00\x00\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00\x1d\xd2\xb1\x81B1\x0cD\xc1\x0f\xd7\x10X\xb2$\xf7\xdf\x18\xe7I^\xb4\x93\xed\xeby\x9e\xef\xeb?\xeb&n\xf2f\xdf\xd4M\xdf\xcc\xcdy\xdf\xf1G\xbf\xba44uki\xeb(\xbb\xd8\xc5.v\xb1\x8b]\xecb\x17\xbb\xd8\xc5\x06\x1bl\xb0\xc1\x06\x1bl\xb0\xc1\x06\x1bl\xb2\xc9&\x9bl\xb2\xc9&\x9bl\xb2\xc9nv\xb3\x9b\xdd\xecf7\xbb\xd9\xcdnv\xb3\xc5\x16[l\xb1\xc5\x16[l\xb1\xc5\x16\xdbl\xb3\xcd6\xdbl\xb3\xcd6\xdbl\xb3\xc3\x0e;\xec\xb0\xc3\x0e;\xec\xb0\xc3\x0e{\xd8\xc3\x1e\xf6\xb0\x87=\xeca\x0f{\xd8s\xfe|\xe3\xf3\x03\xd4\xdb\x86\xadP\x02\x00\x00\x19\x11\x02\x19\x18\x08\x00\x00\x00\x00\x00\x00\x00\x00\x19\x18\x08c\x00\x00\x00\x00\x00\x00\x00\x15\x02\x19\x16\x00\x00\x19\x11\x02\x19\x18\x011\x19\x18\x0299\x15\x02\x19\x16\x00\x00\x19\x1c\x16\x08\x15\xaa\x03\x16\x00\x00\x00\x19\x1c\x16\xb2\x03\x15\x9e\x03\x16\x00\x00\x00\x15\x02\x19\x00&\xb2\x03\x1c\x15\x0c\x19%\x00\x08\x19\x18\x01b\x15\x04\x16\xc8\x01\x16\xd4\t\x16\x9e\x03&\xb2\x03<6\x00(\x0299\x18\x011\x00\x19\x1c\x15\x00\x15\x00\x15\x02\x00\x00\x16\xc8\x07\x15\x18\x16\x8e\x07\x15$\x00\x16\xc8\x16\x16\xc8\x01&\x08\x16\xc8\x06\x14\x00\x00\x19\x1c\x18\x0eiceberg.schema\x18\x90\x01{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":true,"type":"long"},{"id":2,"name":"b","required":true,"type":"string"}]}\x00\x18Jparquet-mr version 1.12.3 (build f8dced182c4c1fbdec6ccb3185537b5a01e6ed6b)\x19,\x1c\x00\x00\x1c\x00\x00\x00\xcf\x01\x00\x00PAR1', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT 
/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fdata%2F00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:309, write_file) 2025-04-04 18:15:04 [ 670 ] INFO : GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-17-f616090c-638c-4ffa-9676-6d656c258c03-00001.parquet user:root, principal:None (hdfs_api.py:256, write_file) 2025-04-04 18:15:04 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-17-f616090c-638c-4ffa-9676-6d656c258c03-00001.parquet?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-17-f616090c-638c-4ffa-9676-6d656c258c03-00001.parquet?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 
'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-17-f616090c-638c-4ffa-9676-6d656c258c03-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-17-f616090c-638c-4ffa-9676-6d656c258c03-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:284, write_file) 2025-04-04 18:15:04 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-17-f616090c-638c-4ffa-9676-6d656c258c03-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'PAR1\x15\x00\x15\xc0\x0c\x15\xf6\x02\x15\xbd\xf3\xd4\x95\x06\x1c\x15\xc8\x01\x15\x00\x15\x08\x15\x08\x00\x00\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00-\xc5\xd7"\x02\x00\x00\x00\xc0\x90Y![$+\xb3\xb23C\x8a\xec\x8c\xca\x8e\xf2\xff\xff\xe0\xc1\xdd\xcb\x05\x02\xffZ\xdc\xea6\x07\xdd\xee\x0ew\xba\xcb\xdd\xeeq\xc8aG\xdc\xeb>\xf7;\xea\x01\x0fz\xc8\xc3\x1e\xf1\xa8\xc7<\xee\x98\'<\xe9\xb8\xa7\x9c\xf0\xb4g<\xeb9\xcf;\xe9\x05/z\xc9\xcb^\xf1\xaaSN;\xe35\xaf{\xc3\x9b\xde\xf2\xb6w\x9c\xf5\xae\xf7\xbc\xef\x03\x1f\xfa\xc89\x1f\xfb\xc4\xa7\xce\xfb\xcc\x05\x17}\xee\x0b\x97|\xe9+_\xfb\xc6\xb7\xbe\xf3\xbd\xcb~\xf0\xa3\x9f\xfc\xec\x8a\xab\xae\xf9\xc5\xaf~\xf3\xbb?\xfc\xe9/\xd7\xfd\xed\x1f7\xdc\xf4\xaf\xff\x00\x02\xc2\xe7q \x03\x00\x00\x15\x00\x15\xa0\t\x15\xea\x02\x15\x8d\xbc\xbf\xb8\t\x1c\x15\xc8\x01\x15\x00\x15\x08\x15\x08\x00\x00\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00\x1d\xd2\xb1\x81B1\x0cD\xc1\x0f\xd7\x10X\xb2$\xf7\xdf\x18\xe7I^\xb4\x93\xed\xeby\x9e\xef\xeb?\xeb&n\xf2f\xdf\xd4M\xdf\xcc\xcdy\xdf\xf1G\xbf\xba44uki\xeb(\xbb\xd8\xc5.v\xb1\x8b]\xecb\x17\xbb\xd8\xc5\x06\x1bl\xb0\xc1\x06\x1bl\xb0\xc1\x06\x1bl\xb2\xc9&\x9bl\xb2\xc9&\x9bl\xb2\xc9nv\xb3\x9b\xdd\xecf7\xbb\xd9\xcdnv\xb3\xc5\x16[l\xb1\xc5\x16[l\xb1\xc5\x16\xdbl\xb3\xcd6\xdbl\xb3\xcd6\xdbl\xb3\xc3\x0e;\xec\xb0\xc3\x0e;\xec\xb0\xc3\x0e{\xd8\xc3\x1e\xf6\xb0\x87=\xeca\x0f{\xd8s\xfe|\xe3\xf3\x03\xd4\xdb\x86\xadP\x02\x00\x00\x19\x11\x02\x19\x18\x08\x00\x00\x00\x00\x00\x00\x00\x00\x19\x18\x08c\x00\x00\x00\x00\x00\x00\x00\x15\x02\x19\x16\x00\x00\x19\x11\x02\x19\x18\x011\x19\x18\x0299\x15\x02\x19\x16\x00\x00\x19\x1c\x16\x08\x15\xaa\x03\x16\x00\x00\x00\x19\x1c\x16\xb2\x03\x15\x9e\x03\x16\x00\x00\x00\x15\x02\x19\x00&\xb2\x03\x1c\x15\x0c\x19%\x00\x08\x19\x18\x01b\x15\x04\x16\xc8\x01\x16\xd4\t\x16\x9e\x03&\xb2\x03<6\x00(\x0299\x18\x011\x00\x19\x1c\x15\x00\x15\x00\x15\x02\x00\x00\x16\xc8\x07\x15\x18\x16\x8e\x07\x15$\x00\x16\xc8\x16\x16\xc8\x01&\x08\x16\xc8\x06\x14\x00\x00\x19\x1c\x18\x0eiceberg.schema\x18\x90\x01{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":true,"type":"long"},{"id":2,"name":"b","required":true,"type":"string"}]}\x00\x18Jparquet-mr version 1.12.3 (build 
f8dced182c4c1fbdec6ccb3185537b5a01e6ed6b)\x19,\x1c\x00\x00\x1c\x00\x00\x00\xcf\x01\x00\x00PAR1', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-17-f616090c-638c-4ffa-9676-6d656c258c03-00001.parquet', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-17-f616090c-638c-4ffa-9676-6d656c258c03-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fdata%2F00000-17-f616090c-638c-4ffa-9676-6d656c258c03-00001.parquet&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-17-f616090c-638c-4ffa-9676-6d656c258c03-00001.parquet', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-17-f616090c-638c-4ffa-9676-6d656c258c03-00001.parquet', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:309, write_file) 2025-04-04 18:15:04 [ 670 ] INFO : GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro user:root, principal:None (hdfs_api.py:256, write_file) 2025-04-04 18:15:04 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 
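
The entries above repeat WebHDFS's two-step file create: the client PUTs to the namenode (port 50070) with op=CREATE and allow_redirects=False, gets a 307 whose Location header names a datanode write URL (http://hdfs1:50075/...), then PUTs the file bytes to that URL and receives 201 Created. Below is a minimal sketch of that handshake, assuming the requests library; NAMENODE and webhdfs_create are illustrative names, and the extra detail visible in the log (hdfs_api.py rewriting the hdfs1 hostname to the container IP 172.16.2.2 and injecting a host header) is deliberately omitted.

    import requests

    NAMENODE = "http://172.16.2.2:50070"  # namenode HTTP endpoint seen in this log

    def webhdfs_create(path: str, data: bytes, user: str = "root") -> None:
        # Step 1: ask the namenode to create the file; it answers 307 with the
        # datanode write URL in the Location header (as in the log above).
        resp = requests.put(
            f"{NAMENODE}/webhdfs/v1{path}",
            params={"op": "CREATE", "overwrite": "true", "user.name": user},
            allow_redirects=False,
        )
        assert resp.status_code == 307, resp.status_code
        # Step 2: send the actual bytes to the datanode; 201 Created on success.
        resp = requests.put(resp.headers["Location"], data=data)
        assert resp.status_code == 201, resp.status_code
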
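The uploads that follow assemble an Iceberg format-version 1 table: Parquet data files under data/, Avro manifest and manifest-list files (snap-*.avro, *-m0.avro) under metadata/, and a versioned v2.metadata.json whose "current-snapshot-id" points at the snapshot whose "manifest-list" names the current snap-*.avro. A hedged sketch of resolving that path from the JSON payload logged below (field names as they appear in the payload; the helper name is illustrative):

    import json

    def current_manifest_list(metadata_json: str) -> str:
        # Resolve the manifest-list file for the table's current snapshot from
        # a v*.metadata.json payload like the one written in this log.
        meta = json.loads(metadata_json)
        current_id = meta["current-snapshot-id"]
        for snapshot in meta["snapshots"]:
            if snapshot["snapshot-id"] == current_id:
                return snapshot["manifest-list"]  # e.g. .../metadata/snap-*.avro
        raise KeyError(f"snapshot {current_id} not in snapshot list")
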
2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:284, write_file) 2025-04-04 18:15:04 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'Obj\x01\x0c\x16avro.schema\xfa\x1b{"type":"record","name":"manifest_file","fields":[{"name":"manifest_path","type":"string","doc":"Location URI with FS scheme","field-id":500},{"name":"manifest_length","type":"long","doc":"Total file size in bytes","field-id":501},{"name":"partition_spec_id","type":"int","doc":"Spec ID used to write","field-id":502},{"name":"added_snapshot_id","type":["null","long"],"doc":"Snapshot ID that added the manifest","default":null,"field-id":503},{"name":"added_data_files_count","type":["null","int"],"doc":"Added entry count","default":null,"field-id":504},{"name":"existing_data_files_count","type":["null","int"],"doc":"Existing entry count","default":null,"field-id":505},{"name":"deleted_data_files_count","type":["null","int"],"doc":"Deleted entry count","default":null,"field-id":506},{"name":"partitions","type":["null",{"type":"array","items":{"type":"record","name":"r508","fields":[{"name":"contains_null","type":"boolean","doc":"True if any file has a null partition value","field-id":509},{"name":"contains_nan","type":["null","boolean"],"doc":"True if any file has a nan partition value","default":null,"field-id":518},{"name":"lower_bound","type":["null","bytes"],"doc":"Partition lower bound for all files","default":null,"field-id":510},{"name":"upper_bound","type":["null","bytes"],"doc":"Partition upper bound for all files","default":null,"field-id":511}]},"element-id":508}],"doc":"Summary for each 
partition","default":null,"field-id":507},{"name":"added_rows_count","type":["null","long"],"doc":"Added rows count","default":null,"field-id":512},{"name":"existing_rows_count","type":["null","long"],"doc":"Existing rows count","default":null,"field-id":513},{"name":"deleted_rows_count","type":["null","long"],"doc":"Deleted rows count","default":null,"field-id":514}]}\x14avro.codec\x0edeflate\x16snapshot-id&8276787480606260770\x1cformat-version\x021\x1ciceberg.schema\xb4\x1a{"type":"struct","schema-id":0,"fields":[{"id":500,"name":"manifest_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":501,"name":"manifest_length","required":true,"type":"long","doc":"Total file size in bytes"},{"id":502,"name":"partition_spec_id","required":true,"type":"int","doc":"Spec ID used to write"},{"id":503,"name":"added_snapshot_id","required":false,"type":"long","doc":"Snapshot ID that added the manifest"},{"id":504,"name":"added_data_files_count","required":false,"type":"int","doc":"Added entry count"},{"id":505,"name":"existing_data_files_count","required":false,"type":"int","doc":"Existing entry count"},{"id":506,"name":"deleted_data_files_count","required":false,"type":"int","doc":"Deleted entry count"},{"id":507,"name":"partitions","required":false,"type":{"type":"list","element-id":508,"element":{"type":"struct","fields":[{"id":509,"name":"contains_null","required":true,"type":"boolean","doc":"True if any file has a null partition value"},{"id":518,"name":"contains_nan","required":false,"type":"boolean","doc":"True if any file has a nan partition value"},{"id":510,"name":"lower_bound","required":false,"type":"binary","doc":"Partition lower bound for all files"},{"id":511,"name":"upper_bound","required":false,"type":"binary","doc":"Partition upper bound for all files"}]},"element-required":true},"doc":"Summary for each partition"},{"id":512,"name":"added_rows_count","required":false,"type":"long","doc":"Added rows count"},{"id":513,"name":"existing_rows_count","required":false,"type":"long","doc":"Existing rows count"},{"id":514,"name":"deleted_rows_count","required":false,"type":"long","doc":"Deleted rows count"}]}$parent-snapshot-id\x08null\x00\xa0C\x02\xe4a\xaf\xab\x01I\x89\x17G\xf3\xb7\x9br\x02\xac\x025\x8c\xbb\r\xc20\x14\x00\x13\xefc\xec\xc4\xcf\xb1\xbd\n\xcd\x93?\xcf\x80\x94\x08)qX\x81\x92\x12\x06\xa2`\tJZj\x1a\x10"\x12\xba\xe2\x8a\x93\xee\xc2\xc4.R\xa0q\x83\xc9\x17/\x12e?\xf7E\x14\x9a\n\xfeK\xec\xe7\xa9\xd0\x88\rnS\x9e\xd0\x85\x98[k:l\xacV\x08&)\xb4\xad\xd1\xa8#H\x90\xde4\xa9\xb5b\xa0\xe2\x97\xa3\xd2\x8e\x9c\x02\xcbM\xce\x86\x83\x03\xc3-X\xe2\x94d\xea\x88BP\x1d\xf0A\xae\xfca\xdc\x7f\xd6\x15\xbb\xbe\xde\xcf\xd3\xf9x\x7f\xd4\x8c\xb1j\xe1V\xff\xf4\x05\xa0C\x02\xe4a\xaf\xab\x01I\x89\x17G\xf3\xb7\x9br', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT 
/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fmetadata%2Fsnap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:309, write_file) 2025-04-04 18:15:04 [ 670 ] INFO : GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json user:root, principal:None (hdfs_api.py:256, write_file) 2025-04-04 18:15:04 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 
'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:284, write_file) 2025-04-04 18:15:04 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'{\n "format-version" : 1,\n "table-uuid" : "762d77fc-31c8-4b8f-a430-fe8ce8ac91f5",\n "location" : "/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28",\n "last-updated-ms" : 1743790504083,\n "last-column-id" : 2,\n "schema" : {\n "type" : "struct",\n "schema-id" : 0,\n "fields" : [ {\n "id" : 1,\n "name" : "a",\n "required" : false,\n "type" : "long"\n }, {\n "id" : 2,\n "name" : "b",\n "required" : false,\n "type" : "string"\n } ]\n },\n "current-schema-id" : 0,\n "schemas" : [ {\n "type" : "struct",\n "schema-id" : 0,\n "fields" : [ {\n "id" : 1,\n "name" : "a",\n "required" : false,\n "type" : "long"\n }, {\n "id" : 2,\n "name" : "b",\n "required" : false,\n "type" : "string"\n } ]\n } ],\n "partition-spec" : [ ],\n "default-spec-id" : 0,\n "partition-specs" : [ {\n "spec-id" : 0,\n "fields" : [ ]\n } ],\n "last-partition-id" : 999,\n "default-sort-order-id" : 0,\n "sort-orders" : [ {\n "order-id" : 0,\n "fields" : [ ]\n } ],\n "properties" : {\n "owner" : "root"\n },\n "current-snapshot-id" : 1118366645057585943,\n "refs" : {\n "main" : {\n "snapshot-id" : 1118366645057585943,\n "type" : "branch"\n }\n },\n "snapshots" : [ {\n "snapshot-id" : 8276787480606260770,\n "timestamp-ms" : 1743790503606,\n "summary" : {\n "operation" : "append",\n "spark.app.id" : "local-1743790492634",\n "added-data-files" : "1",\n "added-records" : "100",\n "added-files-size" : "967",\n "changed-partition-count" : "1",\n "total-records" : "100",\n "total-files-size" : "967",\n "total-data-files" : "1",\n "total-delete-files" : "0",\n "total-position-deletes" : "0",\n "total-equality-deletes" : "0"\n },\n "manifest-list" : "/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro",\n "schema-id" : 0\n }, {\n "snapshot-id" : 1118366645057585943,\n "parent-snapshot-id" : 8276787480606260770,\n "timestamp-ms" : 1743790504083,\n "summary" : {\n "operation" : "append",\n "spark.app.id" : "local-1743790492634",\n "added-data-files" : "1",\n "added-records" : "100",\n "added-files-size" : "967",\n "changed-partition-count" : "1",\n "total-records" : "200",\n "total-files-size" : "1934",\n "total-data-files" : "2",\n "total-delete-files" : "0",\n "total-position-deletes" : "0",\n "total-equality-deletes" : "0"\n 
},\n "manifest-list" : "/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro",\n "schema-id" : 0\n } ],\n "statistics" : [ ],\n "snapshot-log" : [ {\n "timestamp-ms" : 1743790503606,\n "snapshot-id" : 8276787480606260770\n }, {\n "timestamp-ms" : 1743790504083,\n "snapshot-id" : 1118366645057585943\n } ],\n "metadata-log" : [ {\n "timestamp-ms" : 1743790503606,\n "metadata-file" : "/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json"\n } ]\n}', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fmetadata%2Fv2.metadata.json&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:309, write_file) 2025-04-04 18:15:04 [ 670 ] INFO : GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro user:root, principal:None (hdfs_api.py:256, write_file) 2025-04-04 18:15:04 [ 670 ] DEBUG : CALL: {'url': 
'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:284, write_file) 2025-04-04 18:15:04 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'Obj\x01\x0c\x16avro.schema\xfa\x1b{"type":"record","name":"manifest_file","fields":[{"name":"manifest_path","type":"string","doc":"Location URI with FS scheme","field-id":500},{"name":"manifest_length","type":"long","doc":"Total file size in bytes","field-id":501},{"name":"partition_spec_id","type":"int","doc":"Spec ID used to write","field-id":502},{"name":"added_snapshot_id","type":["null","long"],"doc":"Snapshot ID that added the manifest","default":null,"field-id":503},{"name":"added_data_files_count","type":["null","int"],"doc":"Added entry count","default":null,"field-id":504},{"name":"existing_data_files_count","type":["null","int"],"doc":"Existing entry count","default":null,"field-id":505},{"name":"deleted_data_files_count","type":["null","int"],"doc":"Deleted entry count","default":null,"field-id":506},{"name":"partitions","type":["null",{"type":"array","items":{"type":"record","name":"r508","fields":[{"name":"contains_null","type":"boolean","doc":"True if any file has a null partition value","field-id":509},{"name":"contains_nan","type":["null","boolean"],"doc":"True if any file has a 
nan partition value","default":null,"field-id":518},{"name":"lower_bound","type":["null","bytes"],"doc":"Partition lower bound for all files","default":null,"field-id":510},{"name":"upper_bound","type":["null","bytes"],"doc":"Partition upper bound for all files","default":null,"field-id":511}]},"element-id":508}],"doc":"Summary for each partition","default":null,"field-id":507},{"name":"added_rows_count","type":["null","long"],"doc":"Added rows count","default":null,"field-id":512},{"name":"existing_rows_count","type":["null","long"],"doc":"Existing rows count","default":null,"field-id":513},{"name":"deleted_rows_count","type":["null","long"],"doc":"Deleted rows count","default":null,"field-id":514}]}\x14avro.codec\x0edeflate\x16snapshot-id&1118366645057585943\x1cformat-version\x021\x1ciceberg.schema\xb4\x1a{"type":"struct","schema-id":0,"fields":[{"id":500,"name":"manifest_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":501,"name":"manifest_length","required":true,"type":"long","doc":"Total file size in bytes"},{"id":502,"name":"partition_spec_id","required":true,"type":"int","doc":"Spec ID used to write"},{"id":503,"name":"added_snapshot_id","required":false,"type":"long","doc":"Snapshot ID that added the manifest"},{"id":504,"name":"added_data_files_count","required":false,"type":"int","doc":"Added entry count"},{"id":505,"name":"existing_data_files_count","required":false,"type":"int","doc":"Existing entry count"},{"id":506,"name":"deleted_data_files_count","required":false,"type":"int","doc":"Deleted entry count"},{"id":507,"name":"partitions","required":false,"type":{"type":"list","element-id":508,"element":{"type":"struct","fields":[{"id":509,"name":"contains_null","required":true,"type":"boolean","doc":"True if any file has a null partition value"},{"id":518,"name":"contains_nan","required":false,"type":"boolean","doc":"True if any file has a nan partition value"},{"id":510,"name":"lower_bound","required":false,"type":"binary","doc":"Partition lower bound for all files"},{"id":511,"name":"upper_bound","required":false,"type":"binary","doc":"Partition upper bound for all files"}]},"element-required":true},"doc":"Summary for each partition"},{"id":512,"name":"added_rows_count","required":false,"type":"long","doc":"Added rows count"},{"id":513,"name":"existing_rows_count","required":false,"type":"long","doc":"Existing rows count"},{"id":514,"name":"deleted_rows_count","required":false,"type":"long","doc":"Deleted rows count"}]}$parent-snapshot-id&8276787480606260770\x00Vm\x1eI\xe9~m\xf5#\xf2\xb6\x18h\x03\x8f\x9c\x04\xa0\x03\xb5\xce=JCA\x14\x05\xe0d6\xe2\n&o~\xee\xbc\xb9\xb3\x954\xc3\xfc\xdck\x84\x04\xe1\xbd\x89;\x904\x82\xa5\xa9,\\@\xfa\xf4\x16n\xc2\xd2N\xac,DP\xd4@\x1a\xfbp\x8aS\x1c8|[\xd1]\x14\xca4\x9c\xc7\x9aZ\xea*qZ/[\xd7hl\xf1\xb8\x94\xe5zl4D\x1d\x17\x95\xc7\x18ra\x83\xbe\x8f\x1a\x9d\x8d\xe0\xab\x8dh\xbc\x8b\xae\x80\x02\x95\xbc\xae\x06\xbb\x15\xb5txt^1@0\xd2\x02\xb3\x84\xdeV\x99\xb9h\xa9\xc1\x14\x8d\x065\xda$Wj\x96\xae\x86\xcb\xcf\xf9D\xecn^\xdf\xf7\x0f\xf7\xd7gB\x88\xc9!O\xd3\xbf\xda\x9e\\j]\xa0`\x01\xa5g\xf6\x12\x02x\x89\x80$\xa9\xaa\xda\x13\xe5l{8J\xbf\x7f\xa5\x8f\x1f_o\xb7w\x9b\xe7\x97\xe9?\xea\x0fVm\x1eI\xe9~m\xf5#\xf2\xb6\x18h\x03\x8f\x9c', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro', 'user.name': 'root'}, 'allow_redirects': False, 
'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fmetadata%2Fsnap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:309, write_file) 2025-04-04 18:15:04 [ 670 ] INFO : GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8494498371458529486-1-f0279b92-44d9-4660-aef4-f4083db67bb3.avro user:root, principal:None (hdfs_api.py:256, write_file) 2025-04-04 18:15:04 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8494498371458529486-1-f0279b92-44d9-4660-aef4-f4083db67bb3.avro?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT 
/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8494498371458529486-1-f0279b92-44d9-4660-aef4-f4083db67bb3.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8494498371458529486-1-f0279b92-44d9-4660-aef4-f4083db67bb3.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8494498371458529486-1-f0279b92-44d9-4660-aef4-f4083db67bb3.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:284, write_file) 2025-04-04 18:15:04 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8494498371458529486-1-f0279b92-44d9-4660-aef4-f4083db67bb3.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'Obj\x01\x0c\x16avro.schema\xfa\x1b{"type":"record","name":"manifest_file","fields":[{"name":"manifest_path","type":"string","doc":"Location URI with FS scheme","field-id":500},{"name":"manifest_length","type":"long","doc":"Total file size in bytes","field-id":501},{"name":"partition_spec_id","type":"int","doc":"Spec ID used to write","field-id":502},{"name":"added_snapshot_id","type":["null","long"],"doc":"Snapshot ID that added the manifest","default":null,"field-id":503},{"name":"added_data_files_count","type":["null","int"],"doc":"Added entry count","default":null,"field-id":504},{"name":"existing_data_files_count","type":["null","int"],"doc":"Existing entry count","default":null,"field-id":505},{"name":"deleted_data_files_count","type":["null","int"],"doc":"Deleted entry count","default":null,"field-id":506},{"name":"partitions","type":["null",{"type":"array","items":{"type":"record","name":"r508","fields":[{"name":"contains_null","type":"boolean","doc":"True if any file has a null partition value","field-id":509},{"name":"contains_nan","type":["null","boolean"],"doc":"True if any file has a nan partition value","default":null,"field-id":518},{"name":"lower_bound","type":["null","bytes"],"doc":"Partition lower bound for all files","default":null,"field-id":510},{"name":"upper_bound","type":["null","bytes"],"doc":"Partition upper bound for all files","default":null,"field-id":511}]},"element-id":508}],"doc":"Summary for each partition","default":null,"field-id":507},{"name":"added_rows_count","type":["null","long"],"doc":"Added rows count","default":null,"field-id":512},{"name":"existing_rows_count","type":["null","long"],"doc":"Existing 
rows count","default":null,"field-id":513},{"name":"deleted_rows_count","type":["null","long"],"doc":"Deleted rows count","default":null,"field-id":514}]}\x14avro.codec\x0edeflate\x16snapshot-id&8494498371458529486\x1cformat-version\x021\x1ciceberg.schema\xb4\x1a{"type":"struct","schema-id":0,"fields":[{"id":500,"name":"manifest_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":501,"name":"manifest_length","required":true,"type":"long","doc":"Total file size in bytes"},{"id":502,"name":"partition_spec_id","required":true,"type":"int","doc":"Spec ID used to write"},{"id":503,"name":"added_snapshot_id","required":false,"type":"long","doc":"Snapshot ID that added the manifest"},{"id":504,"name":"added_data_files_count","required":false,"type":"int","doc":"Added entry count"},{"id":505,"name":"existing_data_files_count","required":false,"type":"int","doc":"Existing entry count"},{"id":506,"name":"deleted_data_files_count","required":false,"type":"int","doc":"Deleted entry count"},{"id":507,"name":"partitions","required":false,"type":{"type":"list","element-id":508,"element":{"type":"struct","fields":[{"id":509,"name":"contains_null","required":true,"type":"boolean","doc":"True if any file has a null partition value"},{"id":518,"name":"contains_nan","required":false,"type":"boolean","doc":"True if any file has a nan partition value"},{"id":510,"name":"lower_bound","required":false,"type":"binary","doc":"Partition lower bound for all files"},{"id":511,"name":"upper_bound","required":false,"type":"binary","doc":"Partition upper bound for all files"}]},"element-required":true},"doc":"Summary for each partition"},{"id":512,"name":"added_rows_count","required":false,"type":"long","doc":"Added rows count"},{"id":513,"name":"existing_rows_count","required":false,"type":"long","doc":"Existing rows count"},{"id":514,"name":"deleted_rows_count","required":false,"type":"long","doc":"Deleted rows count"}]}$parent-snapshot-id&1118366645057585943\x00 5\xf3\xf21\xbc"\xf9mP\xc7OtuDd\x06\xf0\x03\xbd\xd0;J\x04A\x10\x80\xe1\xdd\xbe\x88\'h\xa7\x1f5\xfd\xb8\x8aI\xd3\x8f*\x15v\x11fz\xbd\x81\x98\x08\x86n \x06\x1e\xc0\xdcT\x0c\x0c\xcd\xc5D1Q12\x10a\x17\xc7\x85\x051W*\xa8\xa0\xa0\xf8\xf8\xe7\xac\xd9\xcd\x98\xb0\xdb\x0e%\xd6\xd8\x14\xa48\x9b\xd4\xa6b_\xc3\xfa\x92\'\xb3\xbeb\x17d\xd8)\xd4\x07\x9f2)gM\x90\xae\xd5\x01l\xd1\xc1)\xdb\x866\x83\x00\x11\xad,\xca5S\xacq\xf5\x91\x84\xb2>y\xc5\x01\x8a\xe7`\x8c\xe0\x11\t8\x81p\xba$cS\xd2|*6\xe3~\xb7\xb7\xdc\x1a\xb1\xd3\xc7\xbb\xe5\xf3\xed\xd5\xc3\xcb\x9816Z\xcd\xcd\xf8{\xcd\xff\x9c\xdaZA\x00\x03U\x03\xd1@\xd5\x85\'\xca\x92KPY:\xe5\xa4\xd3qM\xfd\x1c\xa8\x17G\xaf\xef\x97\xe7g\x07\x1b\xff.\xd5\xadG\xaf\xc1qKd9x\xb0\xdc\x81C\x8eE\x14\x838$5\xf03\xea\xf5\xc7\xe2\xed\xf8\xe4\xf0\xfe\xe9w\xd4/ 5\xf3\xf21\xbc"\xf9mP\xc7OtuDd', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8494498371458529486-1-f0279b92-44d9-4660-aef4-f4083db67bb3.avro', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT 
/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8494498371458529486-1-f0279b92-44d9-4660-aef4-f4083db67bb3.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fmetadata%2Fsnap-8494498371458529486-1-f0279b92-44d9-4660-aef4-f4083db67bb3.avro&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8494498371458529486-1-f0279b92-44d9-4660-aef4-f4083db67bb3.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8494498371458529486-1-f0279b92-44d9-4660-aef4-f4083db67bb3.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:309, write_file) 2025-04-04 18:15:04 [ 670 ] INFO : GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/f0279b92-44d9-4660-aef4-f4083db67bb3-m0.avro user:root, principal:None (hdfs_api.py:256, write_file) 2025-04-04 18:15:04 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/f0279b92-44d9-4660-aef4-f4083db67bb3-m0.avro?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/f0279b92-44d9-4660-aef4-f4083db67bb3-m0.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 
Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/f0279b92-44d9-4660-aef4-f4083db67bb3-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/f0279b92-44d9-4660-aef4-f4083db67bb3-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:284, write_file) 2025-04-04 18:15:04 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/f0279b92-44d9-4660-aef4-f4083db67bb3-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'Obj\x01\x0e\x0cschema\xa4\x02{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":false,"type":"long"},{"id":2,"name":"b","required":false,"type":"string"}]}\x16avro.schema\x96.{"type":"record","name":"manifest_entry","fields":[{"name":"status","type":"int","field-id":0},{"name":"snapshot_id","type":["null","long"],"default":null,"field-id":1},{"name":"data_file","type":{"type":"record","name":"r2","fields":[{"name":"file_path","type":"string","doc":"Location URI with FS scheme","field-id":100},{"name":"file_format","type":"string","doc":"File format name: avro, orc, or parquet","field-id":101},{"name":"partition","type":{"type":"record","name":"r102","fields":[]},"field-id":102},{"name":"record_count","type":"long","doc":"Number of records in the file","field-id":103},{"name":"file_size_in_bytes","type":"long","doc":"Total file size in bytes","field-id":104},{"name":"block_size_in_bytes","type":"long","field-id":105},{"name":"column_sizes","type":["null",{"type":"array","items":{"type":"record","name":"k117_v118","fields":[{"name":"key","type":"int","field-id":117},{"name":"value","type":"long","field-id":118}]},"logicalType":"map"}],"doc":"Map of column id to total size on disk","default":null,"field-id":108},{"name":"value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k119_v120","fields":[{"name":"key","type":"int","field-id":119},{"name":"value","type":"long","field-id":120}]},"logicalType":"map"}],"doc":"Map of column id to total count, including null and NaN","default":null,"field-id":109},{"name":"null_value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k121_v122","fields":[{"name":"key","type":"int","field-id":121},{"name":"value","type":"long","field-id":122}]},"logicalType":"map"}],"doc":"Map of column id to null value count","default":null,"field-id":110},{"name":"nan_value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k138_v139","fields":[{"name":"key","type":"int","field-id":138},{"name":"value","type":"long","field-id":139}]},"logicalType":"map"}],"doc":"Map of column id to number of NaN values in the 
column","default":null,"field-id":137},{"name":"lower_bounds","type":["null",{"type":"array","items":{"type":"record","name":"k126_v127","fields":[{"name":"key","type":"int","field-id":126},{"name":"value","type":"bytes","field-id":127}]},"logicalType":"map"}],"doc":"Map of column id to lower bound","default":null,"field-id":125},{"name":"upper_bounds","type":["null",{"type":"array","items":{"type":"record","name":"k129_v130","fields":[{"name":"key","type":"int","field-id":129},{"name":"value","type":"bytes","field-id":130}]},"logicalType":"map"}],"doc":"Map of column id to upper bound","default":null,"field-id":128},{"name":"key_metadata","type":["null","bytes"],"doc":"Encryption key metadata blob","default":null,"field-id":131},{"name":"split_offsets","type":["null",{"type":"array","items":"long","element-id":133}],"doc":"Splittable offsets","default":null,"field-id":132},{"name":"sort_order_id","type":["null","int"],"doc":"Sort order ID","default":null,"field-id":140}]},"field-id":2}]}\x14avro.codec\x0edeflate\x1cformat-version\x021"partition-spec-id\x020\x1ciceberg.schema\xea${"type":"struct","schema-id":0,"fields":[{"id":0,"name":"status","required":true,"type":"int"},{"id":1,"name":"snapshot_id","required":false,"type":"long"},{"id":2,"name":"data_file","required":true,"type":{"type":"struct","fields":[{"id":100,"name":"file_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":101,"name":"file_format","required":true,"type":"string","doc":"File format name: avro, orc, or parquet"},{"id":102,"name":"partition","required":true,"type":{"type":"struct","fields":[]}},{"id":103,"name":"record_count","required":true,"type":"long","doc":"Number of records in the file"},{"id":104,"name":"file_size_in_bytes","required":true,"type":"long","doc":"Total file size in bytes"},{"id":105,"name":"block_size_in_bytes","required":true,"type":"long"},{"id":108,"name":"column_sizes","required":false,"type":{"type":"map","key-id":117,"key":"int","value-id":118,"value":"long","value-required":true},"doc":"Map of column id to total size on disk"},{"id":109,"name":"value_counts","required":false,"type":{"type":"map","key-id":119,"key":"int","value-id":120,"value":"long","value-required":true},"doc":"Map of column id to total count, including null and NaN"},{"id":110,"name":"null_value_counts","required":false,"type":{"type":"map","key-id":121,"key":"int","value-id":122,"value":"long","value-required":true},"doc":"Map of column id to null value count"},{"id":137,"name":"nan_value_counts","required":false,"type":{"type":"map","key-id":138,"key":"int","value-id":139,"value":"long","value-required":true},"doc":"Map of column id to number of NaN values in the column"},{"id":125,"name":"lower_bounds","required":false,"type":{"type":"map","key-id":126,"key":"int","value-id":127,"value":"binary","value-required":true},"doc":"Map of column id to lower bound"},{"id":128,"name":"upper_bounds","required":false,"type":{"type":"map","key-id":129,"key":"int","value-id":130,"value":"binary","value-required":true},"doc":"Map of column id to upper bound"},{"id":131,"name":"key_metadata","required":false,"type":"binary","doc":"Encryption key metadata blob"},{"id":132,"name":"split_offsets","required":false,"type":{"type":"list","element-id":133,"element":"long","element-required":true},"doc":"Splittable offsets"},{"id":140,"name":"sort_order_id","required":false,"type":"int","doc":"Sort order 
ID"}]}}]}\x1cpartition-spec\x04[]\x00\x03H\xb2^\xec\x11_7\x06\x17\xd1\x80\x1bZ\xf98\x02\xa4\x035\x8c=R\xc30\x14\x84eE\x05\x15p\x11\xc5\x92m\xfduP\xd0\x03\x03\xf5\x1b\xe5I\x02fR\x80-\xf7>AN\xc0p\x00\n\x0eA\xe5\x92\x9e\x0e\xba\xe4\x0e\x0cv&\xd9\xea\xdb\xd9o\x96\xd2\xd7\xdf\xef\xbf\xed\xd7\xe7\xcf\xae\xf8\xa0\xe5\x13\xc6Ul\x1f \xf8\xec\xcb\x10\x93\xef\xd7\xb9\xcc\xb1\xcbp\\p\xddw9\xb6 \xe11\xa4\x0e\xdc\nSe\x8d\x06iU\r\x8d\t5\xd8\xca(P\xd8\x88Fx#Ce\xcb\xfd\x9b\x98\xc3\xa5\xe1IK-\x9c@\xaek\x8b\xbcI\xc9s\xa7\x8d\xe6:h\xa5\xb1R\x16E\xcdgY.\x9f}\xfb\xd2\xc7|z}y{s\x7fu7\x16\x9b\xb3a\x18.(\xa3\xef\x0b\xf6\xb6 \x13\x8c\x05\x1b\x8b\x19\x08#\x84\xcepN\x0eaT\xee;\x1e;snr\xe8\xc9\xe4\xfd\x03\x03H\xb2^\xec\x11_7\x06\x17\xd1\x80\x1bZ\xf98', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/f0279b92-44d9-4660-aef4-f4083db67bb3-m0.avro', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/f0279b92-44d9-4660-aef4-f4083db67bb3-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fmetadata%2Ff0279b92-44d9-4660-aef4-f4083db67bb3-m0.avro&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/f0279b92-44d9-4660-aef4-f4083db67bb3-m0.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/f0279b92-44d9-4660-aef4-f4083db67bb3-m0.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:309, write_file) 2025-04-04 18:15:04 [ 670 ] INFO : GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: 
/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro user:root, principal:None (hdfs_api.py:256, write_file) 2025-04-04 18:15:04 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:284, write_file) 2025-04-04 18:15:04 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'Obj\x01\x0e\x0cschema\xa4\x02{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":false,"type":"long"},{"id":2,"name":"b","required":false,"type":"string"}]}\x16avro.schema\x96.{"type":"record","name":"manifest_entry","fields":[{"name":"status","type":"int","field-id":0},{"name":"snapshot_id","type":["null","long"],"default":null,"field-id":1},{"name":"data_file","type":{"type":"record","name":"r2","fields":[{"name":"file_path","type":"string","doc":"Location URI with FS scheme","field-id":100},{"name":"file_format","type":"string","doc":"File format name: avro, orc, or parquet","field-id":101},{"name":"partition","type":{"type":"record","name":"r102","fields":[]},"field-id":102},{"name":"record_count","type":"long","doc":"Number of records in the file","field-id":103},{"name":"file_size_in_bytes","type":"long","doc":"Total file size in 
bytes","field-id":104},{"name":"block_size_in_bytes","type":"long","field-id":105},{"name":"column_sizes","type":["null",{"type":"array","items":{"type":"record","name":"k117_v118","fields":[{"name":"key","type":"int","field-id":117},{"name":"value","type":"long","field-id":118}]},"logicalType":"map"}],"doc":"Map of column id to total size on disk","default":null,"field-id":108},{"name":"value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k119_v120","fields":[{"name":"key","type":"int","field-id":119},{"name":"value","type":"long","field-id":120}]},"logicalType":"map"}],"doc":"Map of column id to total count, including null and NaN","default":null,"field-id":109},{"name":"null_value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k121_v122","fields":[{"name":"key","type":"int","field-id":121},{"name":"value","type":"long","field-id":122}]},"logicalType":"map"}],"doc":"Map of column id to null value count","default":null,"field-id":110},{"name":"nan_value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k138_v139","fields":[{"name":"key","type":"int","field-id":138},{"name":"value","type":"long","field-id":139}]},"logicalType":"map"}],"doc":"Map of column id to number of NaN values in the column","default":null,"field-id":137},{"name":"lower_bounds","type":["null",{"type":"array","items":{"type":"record","name":"k126_v127","fields":[{"name":"key","type":"int","field-id":126},{"name":"value","type":"bytes","field-id":127}]},"logicalType":"map"}],"doc":"Map of column id to lower bound","default":null,"field-id":125},{"name":"upper_bounds","type":["null",{"type":"array","items":{"type":"record","name":"k129_v130","fields":[{"name":"key","type":"int","field-id":129},{"name":"value","type":"bytes","field-id":130}]},"logicalType":"map"}],"doc":"Map of column id to upper bound","default":null,"field-id":128},{"name":"key_metadata","type":["null","bytes"],"doc":"Encryption key metadata blob","default":null,"field-id":131},{"name":"split_offsets","type":["null",{"type":"array","items":"long","element-id":133}],"doc":"Splittable offsets","default":null,"field-id":132},{"name":"sort_order_id","type":["null","int"],"doc":"Sort order ID","default":null,"field-id":140}]},"field-id":2}]}\x14avro.codec\x0edeflate\x1cformat-version\x021"partition-spec-id\x020\x1ciceberg.schema\xea${"type":"struct","schema-id":0,"fields":[{"id":0,"name":"status","required":true,"type":"int"},{"id":1,"name":"snapshot_id","required":false,"type":"long"},{"id":2,"name":"data_file","required":true,"type":{"type":"struct","fields":[{"id":100,"name":"file_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":101,"name":"file_format","required":true,"type":"string","doc":"File format name: avro, orc, or parquet"},{"id":102,"name":"partition","required":true,"type":{"type":"struct","fields":[]}},{"id":103,"name":"record_count","required":true,"type":"long","doc":"Number of records in the file"},{"id":104,"name":"file_size_in_bytes","required":true,"type":"long","doc":"Total file size in bytes"},{"id":105,"name":"block_size_in_bytes","required":true,"type":"long"},{"id":108,"name":"column_sizes","required":false,"type":{"type":"map","key-id":117,"key":"int","value-id":118,"value":"long","value-required":true},"doc":"Map of column id to total size on disk"},{"id":109,"name":"value_counts","required":false,"type":{"type":"map","key-id":119,"key":"int","value-id":120,"value":"long","value-required":true},"doc":"Map of column id to total 
count, including null and NaN"},{"id":110,"name":"null_value_counts","required":false,"type":{"type":"map","key-id":121,"key":"int","value-id":122,"value":"long","value-required":true},"doc":"Map of column id to null value count"},{"id":137,"name":"nan_value_counts","required":false,"type":{"type":"map","key-id":138,"key":"int","value-id":139,"value":"long","value-required":true},"doc":"Map of column id to number of NaN values in the column"},{"id":125,"name":"lower_bounds","required":false,"type":{"type":"map","key-id":126,"key":"int","value-id":127,"value":"binary","value-required":true},"doc":"Map of column id to lower bound"},{"id":128,"name":"upper_bounds","required":false,"type":{"type":"map","key-id":129,"key":"int","value-id":130,"value":"binary","value-required":true},"doc":"Map of column id to upper bound"},{"id":131,"name":"key_metadata","required":false,"type":"binary","doc":"Encryption key metadata blob"},{"id":132,"name":"split_offsets","required":false,"type":{"type":"list","element-id":133,"element":"long","element-required":true},"doc":"Splittable offsets"},{"id":140,"name":"sort_order_id","required":false,"type":"int","doc":"Sort order ID"}]}}]}\x1cpartition-spec\x04[]\x00N\x9b\x96R\xa0\x1dfv\x80\x8d\xbdgE\xa4)\x83\x02\xa0\x035\x8c;R\xc30\x14EeE\x05\x15\xc9\x0e\xb2\x02a\xebg=uP\xd0\x03\x03\xb5F\xd6\'a&\x05\xb1\xe5\xde\r\x1d\xc3\x12RPR\xa4g\x07^\x0c%\x0b\xc0\xce$\xb7:\xf7\xbe3\x0f\xe3\xe3\xc7\xef\xdf\xcf\xd7\xe1}}\xc4\xe5\xab\x8fMl76\xb8\xec\xca\x10\x93\xebw\xb9\xcc\xb1\xcb\xf6r\xf1\xbb\xbe\xcb\xb1\xb5\xccnC\xea\xaci|\xe2\xa0k\xcb@\t+u\x10\x16\xb8VVyY\xc9\xcai\x168\x94\xa7o\xd5\x1c\xca\x14\xd5\xdcq#\x05P\x01\x82Qi\xa0\xa6\x10\x1aM!\xd5N\xebT\x81\x9a\x96Yf7o\xae\xdd\xf71_?\xdc==\xbe\xdc?\x8f\xc5\xe7r\x18\x86[L\xf0\xf7\x82\x1c\x16h\x82\xb1 c1\x03"\x08\xe1\x19V\xe8\x1c\x82\xd9\xa9\xfbK\'\xc6L\x0e\xbe\x9a\xbc\x7fN\x9b\x96R\xa0\x1dfv\x80\x8d\xbdgE\xa4)\x83', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fmetadata%2F570f4492-34ff-463d-bfc1-142c1828183a-m0.avro&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 
Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:309, write_file) 2025-04-04 18:15:04 [ 670 ] INFO : GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text user:root, principal:None (hdfs_api.py:256, write_file) 2025-04-04 18:15:04 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:284, write_file) 2025-04-04 18:15:04 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'3', 'headers': {'content-type': 
'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: m d o363 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: m d o364 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: m d o365 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: m d o366 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: m d o368 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: m d o369 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: m d o370 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: m d o371 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: m d o374 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: m d o379 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: m d o387 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: m d o393 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: m d o400 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: m d o405 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: m d o372 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: m d o373 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: m d o375 
e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: m d o376 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: m d o377 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: m d o378 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: m d o380 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: m d o381 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: m d o382 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: m d o383 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: m d o384 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: m d o386 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: m d o388 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: m d o389 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: m d o390 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: m d o391 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: m d o392 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: m d o394 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: m d o395 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: m d o396 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: m d o398 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer 
received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: m d o399 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: m d o401 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: m d o402 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: m d o403 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Command to send: m d o406 e (clientserver.py:501, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:04 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fmetadata%2Fversion-hint.text&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:04 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:04 [ 670 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:04 GMT, Fri, 04 Apr 2025 18:15:04 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:309, write_file) 2025-04-04 18:15:04 [ 670 ] INFO : GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:04 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:05 [ 670 ] DEBUG : http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:05 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro user:root, principal:None (hdfs_api.py:256, write_file) 2025-04-04 18:15:05 [ 670 ] DEBUG : CALL: {'url': 
'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:05 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:05 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:05 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:05 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:284, write_file) 2025-04-04 18:15:05 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'Obj\x01\x0e\x0cschema\xa4\x02{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":false,"type":"long"},{"id":2,"name":"b","required":false,"type":"string"}]}\x16avro.schema\x96.{"type":"record","name":"manifest_entry","fields":[{"name":"status","type":"int","field-id":0},{"name":"snapshot_id","type":["null","long"],"default":null,"field-id":1},{"name":"data_file","type":{"type":"record","name":"r2","fields":[{"name":"file_path","type":"string","doc":"Location URI with FS scheme","field-id":100},{"name":"file_format","type":"string","doc":"File format name: avro, orc, or parquet","field-id":101},{"name":"partition","type":{"type":"record","name":"r102","fields":[]},"field-id":102},{"name":"record_count","type":"long","doc":"Number of records in the file","field-id":103},{"name":"file_size_in_bytes","type":"long","doc":"Total file size in bytes","field-id":104},{"name":"block_size_in_bytes","type":"long","field-id":105},{"name":"column_sizes","type":["null",{"type":"array","items":{"type":"record","name":"k117_v118","fields":[{"name":"key","type":"int","field-id":117},{"name":"value","type":"long","field-id":118}]},"logicalType":"map"}],"doc":"Map of column id to total size on 
disk","default":null,"field-id":108},{"name":"value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k119_v120","fields":[{"name":"key","type":"int","field-id":119},{"name":"value","type":"long","field-id":120}]},"logicalType":"map"}],"doc":"Map of column id to total count, including null and NaN","default":null,"field-id":109},{"name":"null_value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k121_v122","fields":[{"name":"key","type":"int","field-id":121},{"name":"value","type":"long","field-id":122}]},"logicalType":"map"}],"doc":"Map of column id to null value count","default":null,"field-id":110},{"name":"nan_value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k138_v139","fields":[{"name":"key","type":"int","field-id":138},{"name":"value","type":"long","field-id":139}]},"logicalType":"map"}],"doc":"Map of column id to number of NaN values in the column","default":null,"field-id":137},{"name":"lower_bounds","type":["null",{"type":"array","items":{"type":"record","name":"k126_v127","fields":[{"name":"key","type":"int","field-id":126},{"name":"value","type":"bytes","field-id":127}]},"logicalType":"map"}],"doc":"Map of column id to lower bound","default":null,"field-id":125},{"name":"upper_bounds","type":["null",{"type":"array","items":{"type":"record","name":"k129_v130","fields":[{"name":"key","type":"int","field-id":129},{"name":"value","type":"bytes","field-id":130}]},"logicalType":"map"}],"doc":"Map of column id to upper bound","default":null,"field-id":128},{"name":"key_metadata","type":["null","bytes"],"doc":"Encryption key metadata blob","default":null,"field-id":131},{"name":"split_offsets","type":["null",{"type":"array","items":"long","element-id":133}],"doc":"Splittable offsets","default":null,"field-id":132},{"name":"sort_order_id","type":["null","int"],"doc":"Sort order ID","default":null,"field-id":140}]},"field-id":2}]}\x14avro.codec\x0edeflate\x1cformat-version\x021"partition-spec-id\x020\x1ciceberg.schema\xea${"type":"struct","schema-id":0,"fields":[{"id":0,"name":"status","required":true,"type":"int"},{"id":1,"name":"snapshot_id","required":false,"type":"long"},{"id":2,"name":"data_file","required":true,"type":{"type":"struct","fields":[{"id":100,"name":"file_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":101,"name":"file_format","required":true,"type":"string","doc":"File format name: avro, orc, or parquet"},{"id":102,"name":"partition","required":true,"type":{"type":"struct","fields":[]}},{"id":103,"name":"record_count","required":true,"type":"long","doc":"Number of records in the file"},{"id":104,"name":"file_size_in_bytes","required":true,"type":"long","doc":"Total file size in bytes"},{"id":105,"name":"block_size_in_bytes","required":true,"type":"long"},{"id":108,"name":"column_sizes","required":false,"type":{"type":"map","key-id":117,"key":"int","value-id":118,"value":"long","value-required":true},"doc":"Map of column id to total size on disk"},{"id":109,"name":"value_counts","required":false,"type":{"type":"map","key-id":119,"key":"int","value-id":120,"value":"long","value-required":true},"doc":"Map of column id to total count, including null and NaN"},{"id":110,"name":"null_value_counts","required":false,"type":{"type":"map","key-id":121,"key":"int","value-id":122,"value":"long","value-required":true},"doc":"Map of column id to null value 
count"},{"id":137,"name":"nan_value_counts","required":false,"type":{"type":"map","key-id":138,"key":"int","value-id":139,"value":"long","value-required":true},"doc":"Map of column id to number of NaN values in the column"},{"id":125,"name":"lower_bounds","required":false,"type":{"type":"map","key-id":126,"key":"int","value-id":127,"value":"binary","value-required":true},"doc":"Map of column id to lower bound"},{"id":128,"name":"upper_bounds","required":false,"type":{"type":"map","key-id":129,"key":"int","value-id":130,"value":"binary","value-required":true},"doc":"Map of column id to upper bound"},{"id":131,"name":"key_metadata","required":false,"type":"binary","doc":"Encryption key metadata blob"},{"id":132,"name":"split_offsets","required":false,"type":{"type":"list","element-id":133,"element":"long","element-required":true},"doc":"Splittable offsets"},{"id":140,"name":"sort_order_id","required":false,"type":"int","doc":"Sort order ID"}]}}]}\x1cpartition-spec\x04[]\x00\x14\x89&/\xcb\x13\xaf\xce\xae1c\x15r>\x91\xf7\x02\xa4\x035\x8c\xb1N\xc30\x14E\x1d\xd7\x03\x13\xed\x8f\x98\xd8\x8e\x1d;[\x19\xd8\x01\xc1\xfc\xf4b;-R\x07\x9a8{66\x18\xd9\xf8\x00\x06\xbe\x82!?\xc1\xc8\xca\xcc\x82DR\xb5w:W\xf7\xe8R\xfa\xf9\xfb\xf7\xf3\xf2\xfa\xf4\xf5\x9d}\xd0\xfc\xc1\xc7:\xb6\x1b\x08\x980\x0f\xb1\xc1~\x97\xf2\x14\xbb\x04\xa7\xc5\xef\xfa.\xc5\x16$lC\xd3AU\xfbF9[\x82t\xa6\x00mC\x01NY\x03\xc6k\xa1\x05Z\x19\x94\xcb\x0fob\x0e\x97\x05\xd7\xbeq6\x08\xc5\x8d\xaej\xae\x95B^\x17(x\x94F\x96\xb6D\x1d\x05\xf2Y\x96\x17\x8f\xd8\xee\xfb\x98\xce\xaf/oo\xee\xaf\xee\xc6\xecy9\x0c\xc3\x9a2\xfa\xbe`o\x0b2\xc1\x98\xb11\x9b\x810B\xe8\x0c+r\x0c\xa3\xf2\xd0\xfd\xa9\xb3\xaa\x9a\x1cz6y\xff\x14\x89&/\xcb\x13\xaf\xce\xae1c\x15r>\x91\xf7', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:05 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:05 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fmetadata%2F359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:05 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:05 [ 670 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 
'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:309, write_file) 2025-04-04 18:15:05 [ 670 ] INFO : GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:05 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:05 [ 670 ] DEBUG : http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:05 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json user:root, principal:None (hdfs_api.py:256, write_file) 2025-04-04 18:15:05 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:05 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:05 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:05 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:05 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:284, write_file) 2025-04-04 18:15:05 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'{\n "format-version" : 1,\n "table-uuid" : "762d77fc-31c8-4b8f-a430-fe8ce8ac91f5",\n "location" : "/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28",\n "last-updated-ms" : 1743790503606,\n 
"last-column-id" : 2,\n "schema" : {\n "type" : "struct",\n "schema-id" : 0,\n "fields" : [ {\n "id" : 1,\n "name" : "a",\n "required" : false,\n "type" : "long"\n }, {\n "id" : 2,\n "name" : "b",\n "required" : false,\n "type" : "string"\n } ]\n },\n "current-schema-id" : 0,\n "schemas" : [ {\n "type" : "struct",\n "schema-id" : 0,\n "fields" : [ {\n "id" : 1,\n "name" : "a",\n "required" : false,\n "type" : "long"\n }, {\n "id" : 2,\n "name" : "b",\n "required" : false,\n "type" : "string"\n } ]\n } ],\n "partition-spec" : [ ],\n "default-spec-id" : 0,\n "partition-specs" : [ {\n "spec-id" : 0,\n "fields" : [ ]\n } ],\n "last-partition-id" : 999,\n "default-sort-order-id" : 0,\n "sort-orders" : [ {\n "order-id" : 0,\n "fields" : [ ]\n } ],\n "properties" : {\n "owner" : "root"\n },\n "current-snapshot-id" : 8276787480606260770,\n "refs" : {\n "main" : {\n "snapshot-id" : 8276787480606260770,\n "type" : "branch"\n }\n },\n "snapshots" : [ {\n "snapshot-id" : 8276787480606260770,\n "timestamp-ms" : 1743790503606,\n "summary" : {\n "operation" : "append",\n "spark.app.id" : "local-1743790492634",\n "added-data-files" : "1",\n "added-records" : "100",\n "added-files-size" : "967",\n "changed-partition-count" : "1",\n "total-records" : "100",\n "total-files-size" : "967",\n "total-data-files" : "1",\n "total-delete-files" : "0",\n "total-position-deletes" : "0",\n "total-equality-deletes" : "0"\n },\n "manifest-list" : "/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro",\n "schema-id" : 0\n } ],\n "statistics" : [ ],\n "snapshot-log" : [ {\n "timestamp-ms" : 1743790503606,\n "snapshot-id" : 8276787480606260770\n } ],\n "metadata-log" : [ ]\n}', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:05 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:05 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fmetadata%2Fv1.metadata.json&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:05 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:05 [ 670 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 
'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:309, write_file) 2025-04-04 18:15:05 [ 670 ] INFO : GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:05 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:05 [ 670 ] DEBUG : http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:05 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v3.metadata.json user:root, principal:None (hdfs_api.py:256, write_file) 2025-04-04 18:15:05 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v3.metadata.json?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:05 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:05 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v3.metadata.json?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:05 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v3.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:05 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v3.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:284, write_file) 2025-04-04 18:15:05 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v3.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'{\n "format-version" : 1,\n "table-uuid" : "762d77fc-31c8-4b8f-a430-fe8ce8ac91f5",\n "location" : "/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28",\n "last-updated-ms" : 1743790504648,\n "last-column-id" : 2,\n "schema" : 
{\n "type" : "struct",\n "schema-id" : 0,\n "fields" : [ {\n "id" : 1,\n "name" : "a",\n "required" : false,\n "type" : "long"\n }, {\n "id" : 2,\n "name" : "b",\n "required" : false,\n "type" : "string"\n } ]\n },\n "current-schema-id" : 0,\n "schemas" : [ {\n "type" : "struct",\n "schema-id" : 0,\n "fields" : [ {\n "id" : 1,\n "name" : "a",\n "required" : false,\n "type" : "long"\n }, {\n "id" : 2,\n "name" : "b",\n "required" : false,\n "type" : "string"\n } ]\n } ],\n "partition-spec" : [ ],\n "default-spec-id" : 0,\n "partition-specs" : [ {\n "spec-id" : 0,\n "fields" : [ ]\n } ],\n "last-partition-id" : 999,\n "default-sort-order-id" : 0,\n "sort-orders" : [ {\n "order-id" : 0,\n "fields" : [ ]\n } ],\n "properties" : {\n "owner" : "root"\n },\n "current-snapshot-id" : 8494498371458529486,\n "refs" : {\n "main" : {\n "snapshot-id" : 8494498371458529486,\n "type" : "branch"\n }\n },\n "snapshots" : [ {\n "snapshot-id" : 8276787480606260770,\n "timestamp-ms" : 1743790503606,\n "summary" : {\n "operation" : "append",\n "spark.app.id" : "local-1743790492634",\n "added-data-files" : "1",\n "added-records" : "100",\n "added-files-size" : "967",\n "changed-partition-count" : "1",\n "total-records" : "100",\n "total-files-size" : "967",\n "total-data-files" : "1",\n "total-delete-files" : "0",\n "total-position-deletes" : "0",\n "total-equality-deletes" : "0"\n },\n "manifest-list" : "/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro",\n "schema-id" : 0\n }, {\n "snapshot-id" : 1118366645057585943,\n "parent-snapshot-id" : 8276787480606260770,\n "timestamp-ms" : 1743790504083,\n "summary" : {\n "operation" : "append",\n "spark.app.id" : "local-1743790492634",\n "added-data-files" : "1",\n "added-records" : "100",\n "added-files-size" : "967",\n "changed-partition-count" : "1",\n "total-records" : "200",\n "total-files-size" : "1934",\n "total-data-files" : "2",\n "total-delete-files" : "0",\n "total-position-deletes" : "0",\n "total-equality-deletes" : "0"\n },\n "manifest-list" : "/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro",\n "schema-id" : 0\n }, {\n "snapshot-id" : 8494498371458529486,\n "parent-snapshot-id" : 1118366645057585943,\n "timestamp-ms" : 1743790504648,\n "summary" : {\n "operation" : "append",\n "spark.app.id" : "local-1743790492634",\n "added-data-files" : "1",\n "added-records" : "100",\n "added-files-size" : "967",\n "changed-partition-count" : "1",\n "total-records" : "300",\n "total-files-size" : "2901",\n "total-data-files" : "3",\n "total-delete-files" : "0",\n "total-position-deletes" : "0",\n "total-equality-deletes" : "0"\n },\n "manifest-list" : "/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8494498371458529486-1-f0279b92-44d9-4660-aef4-f4083db67bb3.avro",\n "schema-id" : 0\n } ],\n "statistics" : [ ],\n "snapshot-log" : [ {\n "timestamp-ms" : 1743790503606,\n "snapshot-id" : 8276787480606260770\n }, {\n "timestamp-ms" : 1743790504083,\n "snapshot-id" : 1118366645057585943\n }, {\n "timestamp-ms" : 1743790504648,\n "snapshot-id" : 8494498371458529486\n } ],\n "metadata-log" : [ {\n "timestamp-ms" : 1743790503606,\n "metadata-file" : "/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json"\n }, {\n "timestamp-ms" : 
1743790504083,\n  "metadata-file" : "/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json"\n  } ]\n}', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v3.metadata.json', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper)
2025-04-04 18:15:05 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn)
2025-04-04 18:15:05 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v3.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28%2Fmetadata%2Fv3.metadata.json&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request)
2025-04-04 18:15:05 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v3.metadata.json', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper)
2025-04-04 18:15:05 [ 670 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:05 GMT, Fri, 04 Apr 2025 18:15:05 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v3.metadata.json', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:309, write_file)
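Every metadata file above reaches HDFS through the same two-step WebHDFS handshake that the hdfs_api.py wrapper logs: a PUT with op=CREATE to the namenode (port 50070) sent with redirects disabled, a 307 response whose Location header names a datanode (port 50075), then a second PUT carrying the payload to that datanode URL, which returns 201 Created. A minimal sketch of that handshake using requests; the helper name is illustrative, and it assumes the unauthenticated setup seen in the log (principal:None):

import requests

def webhdfs_create(namenode: str, path: str, data: bytes, user: str = "root") -> None:
    # Step 1: ask the namenode where to write. Redirects stay manual so the
    # 307 Location header (a datanode URL) can be captured, as in the log above.
    first = requests.put(
        f"http://{namenode}/webhdfs/v1{path}",
        params={"op": "CREATE", "overwrite": "true", "user.name": user},
        allow_redirects=False,
    )
    assert first.status_code == 307
    # Step 2: send the bytes to the datanode from the redirect; 201 means created.
    second = requests.put(first.headers["Location"], data=data)
    assert second.status_code == 201

# e.g. webhdfs_create("172.16.2.2:50070", "/iceberg_data/.../version-hint.text", b"3")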
2025-04-04 18:15:05 [ 670 ] INFO : Adding another dataframe. result files: ['/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-17-f616090c-638c-4ffa-9676-6d656c258c03-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8494498371458529486-1-f0279b92-44d9-4660-aef4-f4083db67bb3.avro', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/f0279b92-44d9-4660-aef4-f4083db67bb3-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v3.metadata.json'] (test.py:645, add_df)
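The file list that add_df reports is the standard Iceberg-on-HDFS layout: data/*.parquet holds the rows, while metadata/ accumulates one vN.metadata.json per commit, one snap-*.avro manifest list per snapshot, the *-m0.avro manifests those lists point at, and version-hint.text, whose one-byte body (the b'3' PUT earlier) names the current metadata version. A sketch of how a version-hint reader resolves the active snapshot, assuming the table directory is reachable as a local path; the function name is illustrative:

import json
from pathlib import Path

def current_manifest_list(table_dir: str) -> str:
    # version-hint.text holds only the latest metadata version, e.g. "3".
    meta = Path(table_dir) / "metadata"
    version = (meta / "version-hint.text").read_text().strip()
    table_meta = json.loads((meta / f"v{version}.metadata.json").read_text())
    # The metadata file records every snapshot; the current one points at its
    # snap-*.avro manifest list, which in turn names the *-m0.avro manifests.
    current_id = table_meta["current-snapshot-id"]
    for snapshot in table_meta["snapshots"]:
        if snapshot["snapshot-id"] == current_id:
            return snapshot["manifest-list"]
    raise KeyError(f"snapshot {current_id} not found")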
files: ['/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-13-4cf87d02-549b-422a-b3a0-e151676a4e0a-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-15-72a29438-3831-4986-8db7-8f6a77f08586-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/data/00000-17-f616090c-638c-4ffa-9676-6d656c258c03-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8276787480606260770-1-359e9348-7ff7-4947-848e-ed0d6eebb364.avro', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v2.metadata.json', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-1118366645057585943-1-570f4492-34ff-463d-bfc1-142c1828183a.avro', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/snap-8494498371458529486-1-f0279b92-44d9-4660-aef4-f4083db67bb3.avro', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/f0279b92-44d9-4660-aef4-f4083db67bb3-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/570f4492-34ff-463d-bfc1-142c1828183a-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/version-hint.text', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/359e9348-7ff7-4947-848e-ed0d6eebb364-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v1.metadata.json', '/iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/metadata/v3.metadata.json'] (test.py:653, test_cluster_table_function) 2025-04-04 18:15:05 [ 670 ] DEBUG : Executing query SELECT * FROM system.clusters on node1 (cluster.py:3677, query) 2025-04-04 18:15:05 [ 670 ] INFO : Clusters setup: cluster_simple 1 1 0 1 node1 172.16.2.10 9000 1 default 0 0 0 \N \N \N cluster_simple 1 1 0 2 node2 172.16.2.8 9000 0 default 0 0 0 \N \N \N cluster_simple 1 1 0 3 node3 172.16.2.9 9000 0 default 0 0 0 \N \N \N (test.py:657, test_cluster_table_function) 2025-04-04 18:15:05 [ 670 ] DEBUG : Executing query SELECT * FROM icebergHDFS(hdfs, filename= 'iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/', format=Parquet, url = 'hdfs://hdfs1:9000/') on node1 (cluster.py:3677, query) 2025-04-04 18:15:05 [ 670 ] DEBUG : Executing query SELECT * FROM icebergHDFSCluster('cluster_simple', hdfs, filename= 'iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/', format=Parquet, url = 'hdfs://hdfs1:9000/') on node1 (cluster.py:3677, query) 2025-04-04 18:15:05 [ 670 ] DEBUG : Executing query SELECT * FROM icebergHDFS(hdfs, filename= 'iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/', format=Parquet, url = 'hdfs://hdfs1:9000/') SETTINGS object_storage_cluster='cluster_simple' on node1 (cluster.py:3677, query) 2025-04-04 18:15:05 [ 670 ] DEBUG : Executing query DROP TABLE IF EXISTS test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28; CREATE TABLE test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28 ENGINE=IcebergHDFS(hdfs, filename = 'iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/', 
format=Parquet, url = 'hdfs://hdfs1:9000/') SETTINGS object_storage_cluster = 'cluster_simple' on node1 (cluster.py:3677, query)
2025-04-04 18:15:05 [ 670 ] DEBUG : Executing query SELECT * FROM test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28 on node1 (cluster.py:3677, query)
2025-04-04 18:15:05 [ 670 ] DEBUG : Executing query SELECT * FROM remote('node2', icebergHDFSCluster('cluster_simple', hdfs, filename= 'iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/', format=Parquet, url = 'hdfs://hdfs1:9000/') ) on node1 (cluster.py:3677, query)
2025-04-04 18:15:05 [ 670 ] DEBUG : Executing query DROP TABLE IF EXISTS `test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28` SYNC on node1 (cluster.py:3677, query)
2025-04-04 18:15:05 [ 670 ] DEBUG : Executing query DROP TABLE IF EXISTS test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28; CREATE TABLE test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28 ENGINE=IcebergHDFS(hdfs, filename = 'iceberg_data/default/test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28/', format=Parquet, url = 'hdfs://hdfs1:9000/') on node1 (cluster.py:3677, query)
2025-04-04 18:15:05 [ 670 ] DEBUG : Command to send: m d o408 e (clientserver.py:501, send_command)
2025-04-04 18:15:05 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:05 [ 670 ] DEBUG : Command to send: m d o409 e (clientserver.py:501, send_command)
2025-04-04 18:15:05 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:06 [ 670 ] DEBUG : Executing query SELECT * FROM test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28 on node1 (cluster.py:3677, query)
2025-04-04 18:15:06 [ 670 ] DEBUG : Executing query SELECT * FROM test_iceberg_cluster_1_hdfs_9bcf2876_1853_47d3_8275_5c4040a71d28 SETTINGS object_storage_cluster='cluster_simple' on node1 (cluster.py:3677, query)
_____________________ test_cluster_table_function[hdfs-2] ______________________
[gw0] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = 
format_version = '2', storage_type = 'hdfs'

    @pytest.mark.parametrize("format_version", ["1", "2"])
    @pytest.mark.parametrize("storage_type", ["s3", "azure", "hdfs"])
    def test_cluster_table_function(started_cluster, format_version, storage_type):
        if is_arm() and storage_type == "hdfs":
            pytest.skip("Disabled test IcebergHDFS for aarch64")

        instance = started_cluster.instances["node1"]
        spark = started_cluster.spark_session
        TABLE_NAME = (
            "test_iceberg_cluster_"
            + format_version
            + "_"
            + storage_type
            + "_"
            + get_uuid_str()
        )

        def add_df(mode):
            write_iceberg_from_df(
                spark,
                generate_data(spark, 0, 100),
                TABLE_NAME,
                mode=mode,
                format_version=format_version,
            )
            files = default_upload_directory(
                started_cluster,
                storage_type,
                f"/iceberg_data/default/{TABLE_NAME}/",
                f"/iceberg_data/default/{TABLE_NAME}/",
            )
            logging.info(f"Adding another dataframe. result files: {files}")
            return files

        files = add_df(mode="overwrite")
        for i in range(1, len(started_cluster.instances)):
            files = add_df(mode="append")
        logging.info(f"Setup complete. files: {files}")

        assert len(files) == 5 + 4 * (len(started_cluster.instances) - 1)

        clusters = instance.query(f"SELECT * FROM system.clusters")
        logging.info(f"Clusters setup: {clusters}")

        # Regular Query only node1
        table_function_expr = get_creation_expression(
            storage_type, TABLE_NAME, started_cluster, table_function=True
        )
        select_regular = (
            instance.query(f"SELECT * FROM {table_function_expr}").strip().split()
        )

        # Cluster Query with node1 as coordinator
        table_function_expr_cluster = get_creation_expression(
            storage_type,
            TABLE_NAME,
            started_cluster,
            table_function=True,
            run_on_cluster=True,
        )
        query_id_cluster = str(uuid.uuid4())
        select_cluster = (
            instance.query(
                f"SELECT * FROM {table_function_expr_cluster}", query_id=query_id_cluster
            )
            .strip()
            .split()
        )

        # Cluster Query with node1 as coordinator with alternative syntax
        query_id_cluster_alt_syntax = str(uuid.uuid4())
        select_cluster_alt_syntax = (
            instance.query(
                f"""
                SELECT * FROM {table_function_expr}
                SETTINGS object_storage_cluster='cluster_simple'
                """,
                query_id=query_id_cluster_alt_syntax,
            )
            .strip()
            .split()
        )

        create_iceberg_table(storage_type, instance, TABLE_NAME, started_cluster, object_storage_cluster='cluster_simple')
        query_id_cluster_table_engine = str(uuid.uuid4())
        select_cluster_table_engine = (
            instance.query(
                f"""
                SELECT * FROM {TABLE_NAME}
                """,
                query_id=query_id_cluster_table_engine,
            )
            .strip()
            .split()
        )

        select_remote_cluster = (
            instance.query(f"SELECT * FROM remote('node2',{table_function_expr_cluster})")
            .strip()
            .split()
        )

        instance.query(f"DROP TABLE IF EXISTS `{TABLE_NAME}` SYNC")
        create_iceberg_table(storage_type, instance, TABLE_NAME, started_cluster)
        query_id_pure_table_engine = str(uuid.uuid4())
        select_pure_table_engine = (
            instance.query(
                f"""
                SELECT * FROM {TABLE_NAME}
                """,
                query_id=query_id_pure_table_engine,
            )
            .strip()
            .split()
        )

        query_id_pure_table_engine_cluster = str(uuid.uuid4())
        select_pure_table_engine_cluster = (
            instance.query(
                f"""
                SELECT * FROM {TABLE_NAME}
                SETTINGS object_storage_cluster='cluster_simple'
                """,
                query_id=query_id_pure_table_engine_cluster,
            )
            .strip()
            .split()
        )

        # Simple size check
        assert len(select_regular) == 600
        assert len(select_cluster) == 600
        assert len(select_cluster_alt_syntax) == 600
>       assert len(select_cluster_table_engine) == 600
E       AssertionError: assert 1800 == 600
E        +  where 1800 = len(['0', '1', '1', '2', '2', '3', ...])

test_storage_iceberg/test.py:747: AssertionError
----------------------------- Captured stdout call -----------------------------
25/04/04 18:15:06 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
25/04/04 18:15:06 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
25/04/04 18:15:06 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
25/04/04 18:15:06 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
25/04/04 18:15:06 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
25/04/04 18:15:06 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
{} {} {}
25/04/04 18:15:06 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
25/04/04 18:15:06 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
25/04/04 18:15:06 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
25/04/04 18:15:06 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
25/04/04 18:15:06 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
25/04/04 18:15:06 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
{} {} {}
25/04/04 18:15:07 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
25/04/04 18:15:07 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
25/04/04 18:15:07 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
25/04/04 18:15:07 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
25/04/04 18:15:07 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
25/04/04 18:15:07 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
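The failing assertion above reports exactly three times the expected count (1800 vs 600), and the system.clusters dump logged during setup shows that cluster_simple consists of three nodes. That points at the table engine combined with SETTINGS object_storage_cluster fanning the whole table out to every node instead of assigning each node a subset of the data files, so the coordinator concatenates three full copies. A minimal standalone sketch for confirming an exact k-fold duplication of the expected multiset (duplication_factor is a hypothetical diagnostic helper, not part of the test suite):

from collections import Counter

def duplication_factor(observed, expected):
    # Return k if `observed` is exactly the multiset `expected` repeated k
    # times, else None. k == 3 here would match all three nodes of
    # cluster_simple each returning the complete table.
    obs, exp = Counter(observed), Counter(expected)
    if not exp or set(obs) != set(exp):
        return None
    factors = {obs[v] // exp[v] for v in exp}
    if len(factors) != 1:
        return None
    k = factors.pop()
    return k if all(obs[v] == k * exp[v] for v in exp) else None

# duplication_factor(select_cluster_table_engine, select_regular) returning 3
# would confirm the three-fold duplication hypothesis.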
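For orientation in the setup logs above: each default_upload_directory call mirrors the standard Iceberg-on-HadoopFS layout into HDFS, that is data/*.parquet files plus metadata/vN.metadata.json table metadata, manifest lists (snap-*.avro), manifests (*-m0.avro), and a version-hint.text whose sole content is the current metadata version N. A reader resolves the current table state roughly as follows; this is a sketch over WebHDFS using only the host, port, and user visible in the log, the table path is a placeholder, and real clients follow the Iceberg spec rather than this shortcut:

import requests

NAMENODE = "http://172.16.2.2:50070"        # namenode HTTP address from the log
TABLE = "/iceberg_data/default/some_table"  # placeholder table location

def read_file(path):
    # WebHDFS OPEN: the namenode answers with a 307 redirect to a datanode,
    # which `requests` follows automatically for GET.
    r = requests.get(
        f"{NAMENODE}/webhdfs/v1{path}",
        params={"op": "OPEN", "user.name": "root"},
    )
    r.raise_for_status()
    return r.content

version = int(read_file(f"{TABLE}/metadata/version-hint.text").strip())
metadata_json = read_file(f"{TABLE}/metadata/v{version}.metadata.json")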
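In the captured stderr below, the interleaved "Command to send: ... e" / "Answer received: !y..." pairs are py4j wire-protocol frames exchanged between the Python test process and the Spark driver JVM: "c oNNN <method> ... e" calls a method on JVM object oNNN, "i <class> e" constructs an instance, "m d oNNN e" releases a reference, and the reply prefix encodes the result ("!yv" void, "!ybtrue" boolean true, "!yi1" the integer 1, "!yroNNN" a new object reference). The same traffic can be produced through py4j's public API; a minimal sketch, assuming a JVM-side gateway is already listening, as Spark provides for PySpark:

from py4j.java_gateway import JavaGateway

gateway = JavaGateway()                    # connect to the running JVM gateway
jlist = gateway.jvm.java.util.ArrayList()  # sends an 'i java.util.ArrayList e' frame
jlist.add("a")                             # sends 'c o<id> add sa e'; reply is '!ybtrue'
del jlist                                  # on garbage collection, sends 'm d o<id> e'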
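Likewise, the repeated PUT / 307 / PUT sequences in the stderr below are WebHDFS's two-step create, which the test helper hdfs_api.py drives by hand: a PUT with op=CREATE to the namenode (port 50070) is answered with 307 and a Location header naming a datanode (port 50075), and a second PUT streams the bytes there, returning 201 Created. A minimal sketch of that handshake (host, ports, and user.name taken from the log; the path and payload are placeholders):

import requests

NAMENODE = "http://172.16.2.2:50070"  # from the log; adjust for your cluster
path = "/tmp/example.bin"             # placeholder, not a file from the test
payload = b"example bytes"

# Step 1: ask the namenode to create the file, without following the redirect.
r1 = requests.put(
    f"{NAMENODE}/webhdfs/v1{path}",
    params={"op": "CREATE", "overwrite": "true", "user.name": "root"},
    allow_redirects=False,
)
assert r1.status_code == 307, r1.status_code

# Step 2: send the actual data to the datanode named in the Location header.
r2 = requests.put(r1.headers["Location"], data=payload)
assert r2.status_code == 201, r2.status_code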
{} {} {} ----------------------------- Captured stderr call ----------------------------- Command to send: c o50 sc e Answer received: !yro410 Command to send: c o410 defaultParallelism e Answer received: !yi1 Command to send: c o61 range i0 i100 i1 i1 e Answer received: !yro411 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo412 Command to send: c o412 add sa e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro412 e Answer received: !yro413 Command to send: c o411 toDF ro413 e Answer received: !yro414 Command to send: c o50 sc e Answer received: !yro415 Command to send: c o415 defaultParallelism e Answer received: !yi1 Command to send: c o61 range i1 i101 i1 i1 e Answer received: !yro416 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo417 Command to send: c o417 add sb e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro417 e Answer received: !yro418 Command to send: c o416 toDF ro418 e Answer received: !yro419 Command to send: c o419 apply sb e Answer received: !yro420 Command to send: r u SparkSession rj e Answer received: !ycorg.apache.spark.sql.SparkSession Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e Answer received: !ym Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e Answer received: !yro421 Command to send: c o421 isDefined e Answer received: !ybtrue Command to send: r u SparkSession rj e Answer received: !ycorg.apache.spark.sql.SparkSession Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e Answer received: !ym Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e Answer received: !yro422 Command to send: c o422 get e Answer received: !yro423 Command to send: r u SparkSession$ rj e Answer received: !ycorg.apache.spark.sql.SparkSession$ Command to send: r m org.apache.spark.sql.SparkSession$ MODULE$ e Answer received: !yro424 Command to send: i java.util.HashMap e Answer received: !yao425 Command to send: c o424 applyModifiableSettings ro423 ro425 e Answer received: !yv Command to send: c o61 parseDataType s"string" e Answer received: !yro426 Command to send: c o420 cast ro426 e Answer received: !yro427 Command to send: c o419 withColumn sb ro427 e Answer received: !yro428 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions row_number e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions row_number e Answer received: !yro429 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !yro430 Command to send: r u org rj e Answer received: !yp Command to send: r u org.apache rj e Answer received: !yp Command to send: r u org.apache.spark rj e Answer received: !yp Command to send: r u org.apache.spark.sql rj e Answer received: !yp Command to send: r u 
org.apache.spark.sql.expressions rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions.Window rj e Answer received: !ycorg.apache.spark.sql.expressions.Window Command to send: r m org.apache.spark.sql.expressions.Window orderBy e Answer received: !ym Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo431 Command to send: c o431 add ro430 e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro431 e Answer received: !yro432 Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro432 e Answer received: !yro433 Command to send: c o429 over ro433 e Answer received: !yro434 Command to send: c o414 withColumn srow_index ro434 e Answer received: !yro435 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions row_number e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions row_number e Answer received: !yro436 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !yro437 Command to send: r u org rj e Answer received: !yp Command to send: r u org.apache rj e Answer received: !yp Command to send: r u org.apache.spark rj e Answer received: !yp Command to send: r u org.apache.spark.sql rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions.Window rj e Answer received: !ycorg.apache.spark.sql.expressions.Window Command to send: r m org.apache.spark.sql.expressions.Window orderBy e Answer received: !ym Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo438 Command to send: c o438 add ro437 e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro438 e Answer received: !yro439 Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro439 e Answer received: !yro440 Command to send: c o436 over ro440 e Answer received: !yro441 Command to send: c o428 withColumn srow_index ro441 e Answer received: !yro442 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo443 Command to send: c o443 add srow_index e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro443 e Answer received: !yro444 Command to send: c o435 join ro442 ro444 sinner e Answer received: !yro445 Command to send: c o445 drop srow_index e Answer received: !yro446 Command to send: c o446 writeTo stest_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79 e Answer received: !yro447 Command to send: c o447 tableProperty sformat-version s2 e Answer received: !yro448 Command to send: c o447 using siceberg e Answer received: !yro449 Command to send: c o447 
create e Answer received: !yv GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data?user.name=root&op=GETFILESTATUS HTTP/1.1" 404 None MKDIRS /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data?user.name=root&op=MKDIRS HTTP/1.1" 200 None write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-19-a87394bb-ea7b-4855-b31b-54bb0ab9edd1-00001.parquet user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-19-a87394bb-ea7b-4855-b31b-54bb0ab9edd1-00001.parquet?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-19-a87394bb-ea7b-4855-b31b-54bb0ab9edd1-00001.parquet?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-19-a87394bb-ea7b-4855-b31b-54bb0ab9edd1-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-19-a87394bb-ea7b-4855-b31b-54bb0ab9edd1-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-19-a87394bb-ea7b-4855-b31b-54bb0ab9edd1-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': 
b'PAR1\x15\x00\x15\xc0\x0c\x15\xf6\x02\x15\xbd\xf3\xd4\x95\x06\x1c\x15\xc8\x01\x15\x00\x15\x08\x15\x08\x00\x00\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00-\xc5\xd7"\x02\x00\x00\x00\xc0\x90Y![$+\xb3\xb23C\x8a\xec\x8c\xca\x8e\xf2\xff\xff\xe0\xc1\xdd\xcb\x05\x02\xffZ\xdc\xea6\x07\xdd\xee\x0ew\xba\xcb\xdd\xeeq\xc8aG\xdc\xeb>\xf7;\xea\x01\x0fz\xc8\xc3\x1e\xf1\xa8\xc7<\xee\x98\'<\xe9\xb8\xa7\x9c\xf0\xb4g<\xeb9\xcf;\xe9\x05/z\xc9\xcb^\xf1\xaaSN;\xe35\xaf{\xc3\x9b\xde\xf2\xb6w\x9c\xf5\xae\xf7\xbc\xef\x03\x1f\xfa\xc89\x1f\xfb\xc4\xa7\xce\xfb\xcc\x05\x17}\xee\x0b\x97|\xe9+_\xfb\xc6\xb7\xbe\xf3\xbd\xcb~\xf0\xa3\x9f\xfc\xec\x8a\xab\xae\xf9\xc5\xaf~\xf3\xbb?\xfc\xe9/\xd7\xfd\xed\x1f7\xdc\xf4\xaf\xff\x00\x02\xc2\xe7q \x03\x00\x00\x15\x00\x15\xa0\t\x15\xea\x02\x15\x8d\xbc\xbf\xb8\t\x1c\x15\xc8\x01\x15\x00\x15\x08\x15\x08\x00\x00\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00\x1d\xd2\xb1\x81B1\x0cD\xc1\x0f\xd7\x10X\xb2$\xf7\xdf\x18\xe7I^\xb4\x93\xed\xeby\x9e\xef\xeb?\xeb&n\xf2f\xdf\xd4M\xdf\xcc\xcdy\xdf\xf1G\xbf\xba44uki\xeb(\xbb\xd8\xc5.v\xb1\x8b]\xecb\x17\xbb\xd8\xc5\x06\x1bl\xb0\xc1\x06\x1bl\xb0\xc1\x06\x1bl\xb2\xc9&\x9bl\xb2\xc9&\x9bl\xb2\xc9nv\xb3\x9b\xdd\xecf7\xbb\xd9\xcdnv\xb3\xc5\x16[l\xb1\xc5\x16[l\xb1\xc5\x16\xdbl\xb3\xcd6\xdbl\xb3\xcd6\xdbl\xb3\xc3\x0e;\xec\xb0\xc3\x0e;\xec\xb0\xc3\x0e{\xd8\xc3\x1e\xf6\xb0\x87=\xeca\x0f{\xd8s\xfe|\xe3\xf3\x03\xd4\xdb\x86\xadP\x02\x00\x00\x19\x11\x02\x19\x18\x08\x00\x00\x00\x00\x00\x00\x00\x00\x19\x18\x08c\x00\x00\x00\x00\x00\x00\x00\x15\x02\x19\x16\x00\x00\x19\x11\x02\x19\x18\x011\x19\x18\x0299\x15\x02\x19\x16\x00\x00\x19\x1c\x16\x08\x15\xaa\x03\x16\x00\x00\x00\x19\x1c\x16\xb2\x03\x15\x9e\x03\x16\x00\x00\x00\x15\x02\x19\x00&\xb2\x03\x1c\x15\x0c\x19%\x00\x08\x19\x18\x01b\x15\x04\x16\xc8\x01\x16\xd4\t\x16\x9e\x03&\xb2\x03<6\x00(\x0299\x18\x011\x00\x19\x1c\x15\x00\x15\x00\x15\x02\x00\x00\x16\xc8\x07\x15\x18\x16\x8e\x07\x15$\x00\x16\xc8\x16\x16\xc8\x01&\x08\x16\xc8\x06\x14\x00\x00\x19\x1c\x18\x0eiceberg.schema\x18\x90\x01{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":true,"type":"long"},{"id":2,"name":"b","required":true,"type":"string"}]}\x00\x18Jparquet-mr version 1.12.3 (build f8dced182c4c1fbdec6ccb3185537b5a01e6ed6b)\x19,\x1c\x00\x00\x1c\x00\x00\x00\xcf\x01\x00\x00PAR1', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-19-a87394bb-ea7b-4855-b31b-54bb0ab9edd1-00001.parquet', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-19-a87394bb-ea7b-4855-b31b-54bb0ab9edd1-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79%2Fdata%2F00000-19-a87394bb-ea7b-4855-b31b-54bb0ab9edd1-00001.parquet&user.name=root HTTP/1.1" 201 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-19-a87394bb-ea7b-4855-b31b-54bb0ab9edd1-00001.parquet', 'Content-Length': '0', 
'Server': 'Jetty(6.1.26)'} b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-19-a87394bb-ea7b-4855-b31b-54bb0ab9edd1-00001.parquet', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 404 None MKDIRS /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata?user.name=root&op=MKDIRS HTTP/1.1" 200 None write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 
'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'Obj\x01\x0e\x16avro.schema\x9e\x1e{"type":"record","name":"manifest_file","fields":[{"name":"manifest_path","type":"string","doc":"Location URI with FS scheme","field-id":500},{"name":"manifest_length","type":"long","doc":"Total file size in bytes","field-id":501},{"name":"partition_spec_id","type":"int","doc":"Spec ID used to write","field-id":502},{"name":"content","type":"int","doc":"Contents of the manifest: 0=data, 1=deletes","field-id":517},{"name":"sequence_number","type":"long","doc":"Sequence number when the manifest was added","field-id":515},{"name":"min_sequence_number","type":"long","doc":"Lowest sequence number in the manifest","field-id":516},{"name":"added_snapshot_id","type":"long","doc":"Snapshot ID that added the manifest","field-id":503},{"name":"added_data_files_count","type":"int","doc":"Added entry count","field-id":504},{"name":"existing_data_files_count","type":"int","doc":"Existing entry count","field-id":505},{"name":"deleted_data_files_count","type":"int","doc":"Deleted entry count","field-id":506},{"name":"added_rows_count","type":"long","doc":"Added rows count","field-id":512},{"name":"existing_rows_count","type":"long","doc":"Existing rows count","field-id":513},{"name":"deleted_rows_count","type":"long","doc":"Deleted rows count","field-id":514},{"name":"partitions","type":["null",{"type":"array","items":{"type":"record","name":"r508","fields":[{"name":"contains_null","type":"boolean","doc":"True if any file has a null partition value","field-id":509},{"name":"contains_nan","type":["null","boolean"],"doc":"True if any file has a nan partition value","default":null,"field-id":518},{"name":"lower_bound","type":["null","bytes"],"doc":"Partition lower bound for all files","default":null,"field-id":510},{"name":"upper_bound","type":["null","bytes"],"doc":"Partition upper bound for all files","default":null,"field-id":511}]},"element-id":508}],"doc":"Summary for each partition","default":null,"field-id":507}]}\x14avro.codec\x0edeflate\x16snapshot-id&5114138684830281544\x1cformat-version\x022\x1esequence-number\x021\x1ciceberg.schema\xd6\x1f{"type":"struct","schema-id":0,"fields":[{"id":500,"name":"manifest_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":501,"name":"manifest_length","required":true,"type":"long","doc":"Total file size in bytes"},{"id":502,"name":"partition_spec_id","required":true,"type":"int","doc":"Spec ID used to write"},{"id":517,"name":"content","required":true,"type":"int","doc":"Contents of the manifest: 0=data, 1=deletes"},{"id":515,"name":"sequence_number","required":true,"type":"long","doc":"Sequence number when the manifest was added"},{"id":516,"name":"min_sequence_number","required":true,"type":"long","doc":"Lowest sequence number in the manifest"},{"id":503,"name":"added_snapshot_id","required":true,"type":"long","doc":"Snapshot ID that added the manifest"},{"id":504,"name":"added_data_files_count","required":true,"type":"int","doc":"Added entry count"},{"id":505,"name":"existing_data_files_count","required":true,"type":"int","doc":"Existing entry count"},{"id":506,"name":"deleted_data_files_count","required":true,"type":"int","doc":"Deleted entry count"},{"id":512,"name":"added_rows_count","required":true,"type":"long","doc":"Added rows 
count"},{"id":513,"name":"existing_rows_count","required":true,"type":"long","doc":"Existing rows count"},{"id":514,"name":"deleted_rows_count","required":true,"type":"long","doc":"Deleted rows count"},{"id":507,"name":"partitions","required":false,"type":{"type":"list","element-id":508,"element":{"type":"struct","fields":[{"id":509,"name":"contains_null","required":true,"type":"boolean","doc":"True if any file has a null partition value"},{"id":518,"name":"contains_nan","required":false,"type":"boolean","doc":"True if any file has a nan partition value"},{"id":510,"name":"lower_bound","required":false,"type":"binary","doc":"Partition lower bound for all files"},{"id":511,"name":"upper_bound","required":false,"type":"binary","doc":"Partition upper bound for all files"}]},"element-required":true},"doc":"Summary for each partition"}]}$parent-snapshot-id\x08null\x00F\xef\x15\xa6\xba\xc3\x1c\'\xf3:W:\xf1n\x85\x96\x02\xa8\x025\xccA\n\xc20\x10@\xd14\xf7\x89\x19b\x9a4\xa7\x19\xa6\x99\xa9\x15Z\x846\xf5\x00\xee\\\x08\xae\xc4\xbd\x87p\xe5\xde\xc3x\x01w\x8a\xe0\xf6?\xf8Wm\xb7YZ\x996\xc8T\xc8\xb2t\xb4\x0c\xc5\x16\x99\x0b\xfe%\x0f\xcb\\dB\x87=w3\x86\x1a\x9a \x9c16-\xa0\x07\xc7H)\x05\xac\xd9\xe7X\x07I]Lv\x94B\xbf#q\xa0\x04\x90\x8coC4\x9e8\x9a\x04\x01\x0c;\xdf0\x05q\xeb$f\x84\x15\xed\xa7\xdd\xabWJ\xeb\xf3\xe3v\xb9\x1f\x8e\xefS\xa5\x95zV\xdf\xa4>F\xef\x15\xa6\xba\xc3\x1c\'\xf3:W:\xf1n\x85\x96', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79%2Fmetadata%2Fsnap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro&user.name=root HTTP/1.1" 201 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "GET 
/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/ad6a9009-4b67-4ad7-9060-d248da6e239e-m0.avro user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/ad6a9009-4b67-4ad7-9060-d248da6e239e-m0.avro?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/ad6a9009-4b67-4ad7-9060-d248da6e239e-m0.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/ad6a9009-4b67-4ad7-9060-d248da6e239e-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/ad6a9009-4b67-4ad7-9060-d248da6e239e-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/ad6a9009-4b67-4ad7-9060-d248da6e239e-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'Obj\x01\x10\x0cschema\xa4\x02{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":false,"type":"long"},{"id":2,"name":"b","required":false,"type":"string"}]}\x16avro.schema\x925{"type":"record","name":"manifest_entry","fields":[{"name":"status","type":"int","field-id":0},{"name":"snapshot_id","type":["null","long"],"default":null,"field-id":1},{"name":"sequence_number","type":["null","long"],"default":null,"field-id":3},{"name":"file_sequence_number","type":["null","long"],"default":null,"field-id":4},{"name":"data_file","type":{"type":"record","name":"r2","fields":[{"name":"content","type":"int","doc":"Contents of the file: 0=data, 1=position deletes, 2=equality deletes","field-id":134},{"name":"file_path","type":"string","doc":"Location URI with FS scheme","field-id":100},{"name":"file_format","type":"string","doc":"File format name: avro, orc, or parquet","field-id":101},{"name":"partition","type":{"type":"record","name":"r102","fields":[]},"doc":"Partition data tuple, schema based on the partition spec","field-id":102},{"name":"record_count","type":"long","doc":"Number of records in the 
file","field-id":103},{"name":"file_size_in_bytes","type":"long","doc":"Total file size in bytes","field-id":104},{"name":"column_sizes","type":["null",{"type":"array","items":{"type":"record","name":"k117_v118","fields":[{"name":"key","type":"int","field-id":117},{"name":"value","type":"long","field-id":118}]},"logicalType":"map"}],"doc":"Map of column id to total size on disk","default":null,"field-id":108},{"name":"value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k119_v120","fields":[{"name":"key","type":"int","field-id":119},{"name":"value","type":"long","field-id":120}]},"logicalType":"map"}],"doc":"Map of column id to total count, including null and NaN","default":null,"field-id":109},{"name":"null_value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k121_v122","fields":[{"name":"key","type":"int","field-id":121},{"name":"value","type":"long","field-id":122}]},"logicalType":"map"}],"doc":"Map of column id to null value count","default":null,"field-id":110},{"name":"nan_value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k138_v139","fields":[{"name":"key","type":"int","field-id":138},{"name":"value","type":"long","field-id":139}]},"logicalType":"map"}],"doc":"Map of column id to number of NaN values in the column","default":null,"field-id":137},{"name":"lower_bounds","type":["null",{"type":"array","items":{"type":"record","name":"k126_v127","fields":[{"name":"key","type":"int","field-id":126},{"name":"value","type":"bytes","field-id":127}]},"logicalType":"map"}],"doc":"Map of column id to lower bound","default":null,"field-id":125},{"name":"upper_bounds","type":["null",{"type":"array","items":{"type":"record","name":"k129_v130","fields":[{"name":"key","type":"int","field-id":129},{"name":"value","type":"bytes","field-id":130}]},"logicalType":"map"}],"doc":"Map of column id to upper bound","default":null,"field-id":128},{"name":"key_metadata","type":["null","bytes"],"doc":"Encryption key metadata blob","default":null,"field-id":131},{"name":"split_offsets","type":["null",{"type":"array","items":"long","element-id":133}],"doc":"Splittable offsets","default":null,"field-id":132},{"name":"equality_ids","type":["null",{"type":"array","items":"int","element-id":136}],"doc":"Equality comparison field IDs","default":null,"field-id":135},{"name":"sort_order_id","type":["null","int"],"doc":"Sort order ID","default":null,"field-id":140}]},"field-id":2}]}\x14avro.codec\x0edeflate\x1cformat-version\x022"partition-spec-id\x020\x1ciceberg.schema\xca+{"type":"struct","schema-id":0,"fields":[{"id":0,"name":"status","required":true,"type":"int"},{"id":1,"name":"snapshot_id","required":false,"type":"long"},{"id":3,"name":"sequence_number","required":false,"type":"long"},{"id":4,"name":"file_sequence_number","required":false,"type":"long"},{"id":2,"name":"data_file","required":true,"type":{"type":"struct","fields":[{"id":134,"name":"content","required":true,"type":"int","doc":"Contents of the file: 0=data, 1=position deletes, 2=equality deletes"},{"id":100,"name":"file_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":101,"name":"file_format","required":true,"type":"string","doc":"File format name: avro, orc, or parquet"},{"id":102,"name":"partition","required":true,"type":{"type":"struct","fields":[]},"doc":"Partition data tuple, schema based on the partition spec"},{"id":103,"name":"record_count","required":true,"type":"long","doc":"Number of records in the 
file"},{"id":104,"name":"file_size_in_bytes","required":true,"type":"long","doc":"Total file size in bytes"},{"id":108,"name":"column_sizes","required":false,"type":{"type":"map","key-id":117,"key":"int","value-id":118,"value":"long","value-required":true},"doc":"Map of column id to total size on disk"},{"id":109,"name":"value_counts","required":false,"type":{"type":"map","key-id":119,"key":"int","value-id":120,"value":"long","value-required":true},"doc":"Map of column id to total count, including null and NaN"},{"id":110,"name":"null_value_counts","required":false,"type":{"type":"map","key-id":121,"key":"int","value-id":122,"value":"long","value-required":true},"doc":"Map of column id to null value count"},{"id":137,"name":"nan_value_counts","required":false,"type":{"type":"map","key-id":138,"key":"int","value-id":139,"value":"long","value-required":true},"doc":"Map of column id to number of NaN values in the column"},{"id":125,"name":"lower_bounds","required":false,"type":{"type":"map","key-id":126,"key":"int","value-id":127,"value":"binary","value-required":true},"doc":"Map of column id to lower bound"},{"id":128,"name":"upper_bounds","required":false,"type":{"type":"map","key-id":129,"key":"int","value-id":130,"value":"binary","value-required":true},"doc":"Map of column id to upper bound"},{"id":131,"name":"key_metadata","required":false,"type":"binary","doc":"Encryption key metadata blob"},{"id":132,"name":"split_offsets","required":false,"type":{"type":"list","element-id":133,"element":"long","element-required":true},"doc":"Splittable offsets"},{"id":135,"name":"equality_ids","required":false,"type":{"type":"list","element-id":136,"element":"int","element-required":true},"doc":"Equality comparison field IDs"},{"id":140,"name":"sort_order_id","required":false,"type":"int","doc":"Sort order ID"}]}}]}\x1cpartition-spec\x04[]\x0econtent\x08data\x00K\xc8?\xd9\xdd\x91\xffC\xbe\x987[\xcd\xc4\xf4X\x02\x9c\x035\x8c\xbbR\xc30\x10EeEE\xaa\xc0\x8f\x08K\x89eyK\nz\xc8@\xbd\xa3\xc7\x1a\x98I\x01\xb6\xfc\x03t\x140T\x944\xe9R\xa4ME\xef_\xa2\x8b\x95!\xb7:\xf7\xee\x99\xe5\xfc\xebw\xfb}x{\xff\xfb(\x18c{^>\x07\xf2\xd4=bt\xc9\x95\x91Z7lR\x99\xa8Ox\xbe\x84\xcd\xd0\'\xeap\x89O\xb1\xed\xb16\xaa\xa9)\x06\xb4\x8dWX\xa9eD\x07P\xa3\x89U\xb0\xa6&h-\x94\xa7o*Gj\x90\xae\xb1+\xa8\xbc\x97\xe4\xac\x97Uc\x8c\xf4+\xed\xa5\x996\xe5\xf7;\xea\x01\x0fz\xc8\xc3\x1e\xf1\xa8\xc7<\xee\x98\'<\xe9\xb8\xa7\x9c\xf0\xb4g<\xeb9\xcf;\xe9\x05/z\xc9\xcb^\xf1\xaaSN;\xe35\xaf{\xc3\x9b\xde\xf2\xb6w\x9c\xf5\xae\xf7\xbc\xef\x03\x1f\xfa\xc89\x1f\xfb\xc4\xa7\xce\xfb\xcc\x05\x17}\xee\x0b\x97|\xe9+_\xfb\xc6\xb7\xbe\xf3\xbd\xcb~\xf0\xa3\x9f\xfc\xec\x8a\xab\xae\xf9\xc5\xaf~\xf3\xbb?\xfc\xe9/\xd7\xfd\xed\x1f7\xdc\xf4\xaf\xff\x00\x02\xc2\xe7q 
\x03\x00\x00\x15\x00\x15\xa0\t\x15\xea\x02\x15\x8d\xbc\xbf\xb8\t\x1c\x15\xc8\x01\x15\x00\x15\x08\x15\x08\x00\x00\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00\x1d\xd2\xb1\x81B1\x0cD\xc1\x0f\xd7\x10X\xb2$\xf7\xdf\x18\xe7I^\xb4\x93\xed\xeby\x9e\xef\xeb?\xeb&n\xf2f\xdf\xd4M\xdf\xcc\xcdy\xdf\xf1G\xbf\xba44uki\xeb(\xbb\xd8\xc5.v\xb1\x8b]\xecb\x17\xbb\xd8\xc5\x06\x1bl\xb0\xc1\x06\x1bl\xb0\xc1\x06\x1bl\xb2\xc9&\x9bl\xb2\xc9&\x9bl\xb2\xc9nv\xb3\x9b\xdd\xecf7\xbb\xd9\xcdnv\xb3\xc5\x16[l\xb1\xc5\x16[l\xb1\xc5\x16\xdbl\xb3\xcd6\xdbl\xb3\xcd6\xdbl\xb3\xc3\x0e;\xec\xb0\xc3\x0e;\xec\xb0\xc3\x0e{\xd8\xc3\x1e\xf6\xb0\x87=\xeca\x0f{\xd8s\xfe|\xe3\xf3\x03\xd4\xdb\x86\xadP\x02\x00\x00\x19\x11\x02\x19\x18\x08\x00\x00\x00\x00\x00\x00\x00\x00\x19\x18\x08c\x00\x00\x00\x00\x00\x00\x00\x15\x02\x19\x16\x00\x00\x19\x11\x02\x19\x18\x011\x19\x18\x0299\x15\x02\x19\x16\x00\x00\x19\x1c\x16\x08\x15\xaa\x03\x16\x00\x00\x00\x19\x1c\x16\xb2\x03\x15\x9e\x03\x16\x00\x00\x00\x15\x02\x19\x00&\xb2\x03\x1c\x15\x0c\x19%\x00\x08\x19\x18\x01b\x15\x04\x16\xc8\x01\x16\xd4\t\x16\x9e\x03&\xb2\x03<6\x00(\x0299\x18\x011\x00\x19\x1c\x15\x00\x15\x00\x15\x02\x00\x00\x16\xc8\x07\x15\x18\x16\x8e\x07\x15$\x00\x16\xc8\x16\x16\xc8\x01&\x08\x16\xc8\x06\x14\x00\x00\x19\x1c\x18\x0eiceberg.schema\x18\x90\x01{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":true,"type":"long"},{"id":2,"name":"b","required":true,"type":"string"}]}\x00\x18Jparquet-mr version 1.12.3 (build f8dced182c4c1fbdec6ccb3185537b5a01e6ed6b)\x19,\x1c\x00\x00\x1c\x00\x00\x00\xcf\x01\x00\x00PAR1', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-19-a87394bb-ea7b-4855-b31b-54bb0ab9edd1-00001.parquet', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-19-a87394bb-ea7b-4855-b31b-54bb0ab9edd1-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79%2Fdata%2F00000-19-a87394bb-ea7b-4855-b31b-54bb0ab9edd1-00001.parquet&user.name=root HTTP/1.1" 201 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-19-a87394bb-ea7b-4855-b31b-54bb0ab9edd1-00001.parquet', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-19-a87394bb-ea7b-4855-b31b-54bb0ab9edd1-00001.parquet', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 
172.16.2.2:50070 http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-21-3cfab2b8-56ba-416d-9c8a-f9630cdd1783-00001.parquet user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-21-3cfab2b8-56ba-416d-9c8a-f9630cdd1783-00001.parquet?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-21-3cfab2b8-56ba-416d-9c8a-f9630cdd1783-00001.parquet?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-21-3cfab2b8-56ba-416d-9c8a-f9630cdd1783-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-21-3cfab2b8-56ba-416d-9c8a-f9630cdd1783-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-21-3cfab2b8-56ba-416d-9c8a-f9630cdd1783-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'PAR1\x15\x00\x15\xc0\x0c\x15\xf6\x02\x15\xbd\xf3\xd4\x95\x06\x1c\x15\xc8\x01\x15\x00\x15\x08\x15\x08\x00\x00\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00-\xc5\xd7"\x02\x00\x00\x00\xc0\x90Y![$+\xb3\xb23C\x8a\xec\x8c\xca\x8e\xf2\xff\xff\xe0\xc1\xdd\xcb\x05\x02\xffZ\xdc\xea6\x07\xdd\xee\x0ew\xba\xcb\xdd\xeeq\xc8aG\xdc\xeb>\xf7;\xea\x01\x0fz\xc8\xc3\x1e\xf1\xa8\xc7<\xee\x98\'<\xe9\xb8\xa7\x9c\xf0\xb4g<\xeb9\xcf;\xe9\x05/z\xc9\xcb^\xf1\xaaSN;\xe35\xaf{\xc3\x9b\xde\xf2\xb6w\x9c\xf5\xae\xf7\xbc\xef\x03\x1f\xfa\xc89\x1f\xfb\xc4\xa7\xce\xfb\xcc\x05\x17}\xee\x0b\x97|\xe9+_\xfb\xc6\xb7\xbe\xf3\xbd\xcb~\xf0\xa3\x9f\xfc\xec\x8a\xab\xae\xf9\xc5\xaf~\xf3\xbb?\xfc\xe9/\xd7\xfd\xed\x1f7\xdc\xf4\xaf\xff\x00\x02\xc2\xe7q 
\x03\x00\x00\x15\x00\x15\xa0\t\x15\xea\x02\x15\x8d\xbc\xbf\xb8\t\x1c\x15\xc8\x01\x15\x00\x15\x08\x15\x08\x00\x00\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00\x1d\xd2\xb1\x81B1\x0cD\xc1\x0f\xd7\x10X\xb2$\xf7\xdf\x18\xe7I^\xb4\x93\xed\xeby\x9e\xef\xeb?\xeb&n\xf2f\xdf\xd4M\xdf\xcc\xcdy\xdf\xf1G\xbf\xba44uki\xeb(\xbb\xd8\xc5.v\xb1\x8b]\xecb\x17\xbb\xd8\xc5\x06\x1bl\xb0\xc1\x06\x1bl\xb0\xc1\x06\x1bl\xb2\xc9&\x9bl\xb2\xc9&\x9bl\xb2\xc9nv\xb3\x9b\xdd\xecf7\xbb\xd9\xcdnv\xb3\xc5\x16[l\xb1\xc5\x16[l\xb1\xc5\x16\xdbl\xb3\xcd6\xdbl\xb3\xcd6\xdbl\xb3\xc3\x0e;\xec\xb0\xc3\x0e;\xec\xb0\xc3\x0e{\xd8\xc3\x1e\xf6\xb0\x87=\xeca\x0f{\xd8s\xfe|\xe3\xf3\x03\xd4\xdb\x86\xadP\x02\x00\x00\x19\x11\x02\x19\x18\x08\x00\x00\x00\x00\x00\x00\x00\x00\x19\x18\x08c\x00\x00\x00\x00\x00\x00\x00\x15\x02\x19\x16\x00\x00\x19\x11\x02\x19\x18\x011\x19\x18\x0299\x15\x02\x19\x16\x00\x00\x19\x1c\x16\x08\x15\xaa\x03\x16\x00\x00\x00\x19\x1c\x16\xb2\x03\x15\x9e\x03\x16\x00\x00\x00\x15\x02\x19\x00&\xb2\x03\x1c\x15\x0c\x19%\x00\x08\x19\x18\x01b\x15\x04\x16\xc8\x01\x16\xd4\t\x16\x9e\x03&\xb2\x03<6\x00(\x0299\x18\x011\x00\x19\x1c\x15\x00\x15\x00\x15\x02\x00\x00\x16\xc8\x07\x15\x18\x16\x8e\x07\x15$\x00\x16\xc8\x16\x16\xc8\x01&\x08\x16\xc8\x06\x14\x00\x00\x19\x1c\x18\x0eiceberg.schema\x18\x90\x01{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":true,"type":"long"},{"id":2,"name":"b","required":true,"type":"string"}]}\x00\x18Jparquet-mr version 1.12.3 (build f8dced182c4c1fbdec6ccb3185537b5a01e6ed6b)\x19,\x1c\x00\x00\x1c\x00\x00\x00\xcf\x01\x00\x00PAR1', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-21-3cfab2b8-56ba-416d-9c8a-f9630cdd1783-00001.parquet', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 Command to send: m d o331 e Answer received: !yv Command to send: m d o330 e Answer received: !yv Command to send: m d o307 e Answer received: !yv Command to send: m d o352 e Answer received: !yv Command to send: m d o359 e Answer received: !yv Command to send: m d o361 e Answer received: !yv Command to send: m d o360 e Answer received: !yv Command to send: m d o347 e Answer received: !yv Command to send: m d o397 e Answer received: !yv Command to send: m d o404 e Answer received: !yv Command to send: m d o407 e Answer received: !yv Command to send: m d o412 e Answer received: !yv Command to send: m d o417 e Answer received: !yv Command to send: m d o425 e Answer received: !yv Command to send: m d o431 e Answer received: !yv Command to send: m d o438 e Answer received: !yv Command to send: m d o443 e Answer received: !yv Command to send: m d o410 e Answer received: !yv Command to send: m d o411 e Answer received: !yv Command to send: m d o413 e Answer received: !yv Command to send: m d o414 e Answer received: !yv Command to send: m d o415 e Answer received: !yv Command to send: m d o416 e Answer received: !yv Command to send: m d o418 e Answer received: !yv Command to send: m d o419 e Answer received: !yv Command to send: m d o420 e Answer received: !yv Command to send: m d o421 e Answer received: !yv Command to send: m d o422 e Answer received: !yv Command to send: m d o424 e Answer received: !yv Command to send: m d o426 e Answer received: !yv Command to send: m d o427 e Answer received: !yv Command to send: m d o428 e Answer received: !yv Command to send: m d o429 e Answer received: !yv Command to send: m d o430 
e Answer received: !yv Command to send: m d o432 e Answer received: !yv Command to send: m d o433 e Answer received: !yv Command to send: m d o434 e Answer received: !yv Command to send: m d o435 e Answer received: !yv Command to send: m d o436 e Answer received: !yv Command to send: m d o437 e Answer received: !yv Command to send: m d o439 e Answer received: !yv Command to send: m d o440 e Answer received: !yv Command to send: m d o441 e Answer received: !yv Command to send: m d o442 e Answer received: !yv Command to send: m d o444 e Answer received: !yv Command to send: m d o445 e Answer received: !yv Command to send: m d o448 e Answer received: !yv Command to send: m d o449 e Answer received: !yv Command to send: m d o452 e Answer received: !yv Command to send: m d o457 e Answer received: !yv Command to send: m d o465 e Answer received: !yv Command to send: m d o471 e Answer received: !yv Command to send: m d o478 e Answer received: !yv Command to send: m d o450 e Answer received: !yv Command to send: m d o451 e Answer received: !yv Command to send: m d o453 e Answer received: !yv Command to send: m d o454 e Answer received: !yv Command to send: m d o455 e Answer received: !yv Command to send: m d o456 e Answer received: !yv Command to send: m d o458 e Answer received: !yv Command to send: m d o459 e Answer received: !yv Command to send: m d o460 e Answer received: !yv Command to send: m d o461 e Answer received: !yv Command to send: m d o462 e Answer received: !yv Command to send: m d o464 e Answer received: !yv Command to send: m d o466 e Answer received: !yv Command to send: m d o467 e Answer received: !yv Command to send: m d o468 e Answer received: !yv Command to send: m d o469 e Answer received: !yv Command to send: m d o470 e Answer received: !yv Command to send: m d o472 e http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-21-3cfab2b8-56ba-416d-9c8a-f9630cdd1783-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79%2Fdata%2F00000-21-3cfab2b8-56ba-416d-9c8a-f9630cdd1783-00001.parquet&user.name=root HTTP/1.1" 201 0 Answer received: !yv response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-21-3cfab2b8-56ba-416d-9c8a-f9630cdd1783-00001.parquet', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} Command to send: m d o473 e b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-21-3cfab2b8-56ba-416d-9c8a-f9630cdd1783-00001.parquet', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} Answer received: !yv Command to send: m d o474 e Answer received: !yv Command to send: m d o476 e Answer received: !yv Command to send: m d o477 e GETFILESTATUS 
/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 172.16.2.2:50070 Answer received: !yv Command to send: m d o479 e Answer received: !yv Command to send: m d o480 e Answer received: !yv Command to send: m d o481 e Answer received: !yv Command to send: m d o483 e Answer received: !yv http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/91b7954c-d64b-4955-bf2e-56c3af9f4a02-m0.avro user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/91b7954c-d64b-4955-bf2e-56c3af9f4a02-m0.avro?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/91b7954c-d64b-4955-bf2e-56c3af9f4a02-m0.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/91b7954c-d64b-4955-bf2e-56c3af9f4a02-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/91b7954c-d64b-4955-bf2e-56c3af9f4a02-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/91b7954c-d64b-4955-bf2e-56c3af9f4a02-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'Obj\x01\x10\x0cschema\xa4\x02{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":false,"type":"long"},{"id":2,"name":"b","required":false,"type":"string"}]}\x16avro.schema\x925{"type":"record","name":"manifest_entry","fields":[{"name":"status","type":"int","field-id":0},{"name":"snapshot_id","type":["null","long"],"default":null,"field-id":1},{"name":"sequence_number","type":["null","long"],"default":null,"field-id":3},{"name":"file_sequence_number","type":["null","long"],"default":null,"field-id":4},{"name":"data_file","type":{"type":"record","name":"r2","fields":[{"name":"content","type":"int","doc":"Contents of the file: 0=data, 1=position deletes, 2=equality 
deletes","field-id":134},{"name":"file_path","type":"string","doc":"Location URI with FS scheme","field-id":100},{"name":"file_format","type":"string","doc":"File format name: avro, orc, or parquet","field-id":101},{"name":"partition","type":{"type":"record","name":"r102","fields":[]},"doc":"Partition data tuple, schema based on the partition spec","field-id":102},{"name":"record_count","type":"long","doc":"Number of records in the file","field-id":103},{"name":"file_size_in_bytes","type":"long","doc":"Total file size in bytes","field-id":104},{"name":"column_sizes","type":["null",{"type":"array","items":{"type":"record","name":"k117_v118","fields":[{"name":"key","type":"int","field-id":117},{"name":"value","type":"long","field-id":118}]},"logicalType":"map"}],"doc":"Map of column id to total size on disk","default":null,"field-id":108},{"name":"value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k119_v120","fields":[{"name":"key","type":"int","field-id":119},{"name":"value","type":"long","field-id":120}]},"logicalType":"map"}],"doc":"Map of column id to total count, including null and NaN","default":null,"field-id":109},{"name":"null_value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k121_v122","fields":[{"name":"key","type":"int","field-id":121},{"name":"value","type":"long","field-id":122}]},"logicalType":"map"}],"doc":"Map of column id to null value count","default":null,"field-id":110},{"name":"nan_value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k138_v139","fields":[{"name":"key","type":"int","field-id":138},{"name":"value","type":"long","field-id":139}]},"logicalType":"map"}],"doc":"Map of column id to number of NaN values in the column","default":null,"field-id":137},{"name":"lower_bounds","type":["null",{"type":"array","items":{"type":"record","name":"k126_v127","fields":[{"name":"key","type":"int","field-id":126},{"name":"value","type":"bytes","field-id":127}]},"logicalType":"map"}],"doc":"Map of column id to lower bound","default":null,"field-id":125},{"name":"upper_bounds","type":["null",{"type":"array","items":{"type":"record","name":"k129_v130","fields":[{"name":"key","type":"int","field-id":129},{"name":"value","type":"bytes","field-id":130}]},"logicalType":"map"}],"doc":"Map of column id to upper bound","default":null,"field-id":128},{"name":"key_metadata","type":["null","bytes"],"doc":"Encryption key metadata blob","default":null,"field-id":131},{"name":"split_offsets","type":["null",{"type":"array","items":"long","element-id":133}],"doc":"Splittable offsets","default":null,"field-id":132},{"name":"equality_ids","type":["null",{"type":"array","items":"int","element-id":136}],"doc":"Equality comparison field IDs","default":null,"field-id":135},{"name":"sort_order_id","type":["null","int"],"doc":"Sort order ID","default":null,"field-id":140}]},"field-id":2}]}\x14avro.codec\x0edeflate\x1cformat-version\x022"partition-spec-id\x020\x1ciceberg.schema\xca+{"type":"struct","schema-id":0,"fields":[{"id":0,"name":"status","required":true,"type":"int"},{"id":1,"name":"snapshot_id","required":false,"type":"long"},{"id":3,"name":"sequence_number","required":false,"type":"long"},{"id":4,"name":"file_sequence_number","required":false,"type":"long"},{"id":2,"name":"data_file","required":true,"type":{"type":"struct","fields":[{"id":134,"name":"content","required":true,"type":"int","doc":"Contents of the file: 0=data, 1=position deletes, 2=equality 
deletes"},{"id":100,"name":"file_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":101,"name":"file_format","required":true,"type":"string","doc":"File format name: avro, orc, or parquet"},{"id":102,"name":"partition","required":true,"type":{"type":"struct","fields":[]},"doc":"Partition data tuple, schema based on the partition spec"},{"id":103,"name":"record_count","required":true,"type":"long","doc":"Number of records in the file"},{"id":104,"name":"file_size_in_bytes","required":true,"type":"long","doc":"Total file size in bytes"},{"id":108,"name":"column_sizes","required":false,"type":{"type":"map","key-id":117,"key":"int","value-id":118,"value":"long","value-required":true},"doc":"Map of column id to total size on disk"},{"id":109,"name":"value_counts","required":false,"type":{"type":"map","key-id":119,"key":"int","value-id":120,"value":"long","value-required":true},"doc":"Map of column id to total count, including null and NaN"},{"id":110,"name":"null_value_counts","required":false,"type":{"type":"map","key-id":121,"key":"int","value-id":122,"value":"long","value-required":true},"doc":"Map of column id to null value count"},{"id":137,"name":"nan_value_counts","required":false,"type":{"type":"map","key-id":138,"key":"int","value-id":139,"value":"long","value-required":true},"doc":"Map of column id to number of NaN values in the column"},{"id":125,"name":"lower_bounds","required":false,"type":{"type":"map","key-id":126,"key":"int","value-id":127,"value":"binary","value-required":true},"doc":"Map of column id to lower bound"},{"id":128,"name":"upper_bounds","required":false,"type":{"type":"map","key-id":129,"key":"int","value-id":130,"value":"binary","value-required":true},"doc":"Map of column id to upper bound"},{"id":131,"name":"key_metadata","required":false,"type":"binary","doc":"Encryption key metadata blob"},{"id":132,"name":"split_offsets","required":false,"type":{"type":"list","element-id":133,"element":"long","element-required":true},"doc":"Splittable offsets"},{"id":135,"name":"equality_ids","required":false,"type":{"type":"list","element-id":136,"element":"int","element-required":true},"doc":"Equality comparison field IDs"},{"id":140,"name":"sort_order_id","required":false,"type":"int","doc":"Sort order ID"}]}}]}\x1cpartition-spec\x04[]\x0econtent\x08data\x00N\xa1\x91W#4\x9a$\xc4\x02\x0f\xa1N\xd6\xebD\x02\x9e\x035\x8c\xadR\x031\x14\x85\xb3i\x04\n\xfa"\xe9&\xfb\x93Md\x05\x1e\x18\xd0w\x92\xdc\xa4\xedLE\xbb\x9b\x15}\x85\n\x1e\x01\x87A \xd1\xc8\x1d\x0c\x9e\x07\xe09h:\xf4\xa8\xef\xcc\xf9\xe6P\xfay\xfc\xf9\xfe\xf8\xfdz=\x10B\xdei\xb9\xf1\xc1\x85~\x05h\x93-1D;nS\x99\xc2\x90\xe0\xb2\xf8\xed8\xa4\xd0C\x05k\x8c\x03\xa8Vh\x15\xd0C\xa7\x9d\x80FT\x08\xd6\x18\x05-6\xbekU0\xb13\xe5\xf9M\xe4\xf0J\xf2\xdaG\xeb*\xa7y\xab\x9c\xe5\x8dT\xc8\x8d\xd7\x96G\xa3j\xe1\x11e\xa7k\x9ee\xb9\xd8\xd9~?\x86t}\xb7|\xb8\x7f\xba}\x9c\x8a\xe7\x1b\xca\xe8\xdb\x8c\xbd\xcc\xc8\t\xa6\x82ME\x06\xc2\x08\xa1\x19\xe6\xe4?\x8c\xcas\xf7\x97\xce\x8c99\xf4*\x8b\x7fN\xa1\x91W#4\x9a$\xc4\x02\x0f\xa1N\xd6\xebD', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/91b7954c-d64b-4955-bf2e-56c3af9f4a02-m0.avro', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT 
/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/91b7954c-d64b-4955-bf2e-56c3af9f4a02-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79%2Fmetadata%2F91b7954c-d64b-4955-bf2e-56c3af9f4a02-m0.avro&user.name=root HTTP/1.1" 201 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/91b7954c-d64b-4955-bf2e-56c3af9f4a02-m0.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/91b7954c-d64b-4955-bf2e-56c3af9f4a02-m0.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/v2.metadata.json user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/v2.metadata.json?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/v2.metadata.json?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/v2.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/v2.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 
'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/v2.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'{\n "format-version" : 2,\n "table-uuid" : "c91f0788-0ba8-4ca9-9925-2ab9a7a11eb7",\n "location" : "/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79",\n "last-sequence-number" : 2,\n "last-updated-ms" : 1743790506942,\n "last-column-id" : 2,\n "current-schema-id" : 0,\n "schemas" : [ {\n "type" : "struct",\n "schema-id" : 0,\n "fields" : [ {\n "id" : 1,\n "name" : "a",\n "required" : false,\n "type" : "long"\n }, {\n "id" : 2,\n "name" : "b",\n "required" : false,\n "type" : "string"\n } ]\n } ],\n "default-spec-id" : 0,\n "partition-specs" : [ {\n "spec-id" : 0,\n "fields" : [ ]\n } ],\n "last-partition-id" : 999,\n "default-sort-order-id" : 0,\n "sort-orders" : [ {\n "order-id" : 0,\n "fields" : [ ]\n } ],\n "properties" : {\n "owner" : "root"\n },\n "current-snapshot-id" : 4370063500831834784,\n "refs" : {\n "main" : {\n "snapshot-id" : 4370063500831834784,\n "type" : "branch"\n }\n },\n "snapshots" : [ {\n "sequence-number" : 1,\n "snapshot-id" : 5114138684830281544,\n "timestamp-ms" : 1743790506546,\n "summary" : {\n "operation" : "append",\n "spark.app.id" : "local-1743790492634",\n "added-data-files" : "1",\n "added-records" : "100",\n "added-files-size" : "967",\n "changed-partition-count" : "1",\n "total-records" : "100",\n "total-files-size" : "967",\n "total-data-files" : "1",\n "total-delete-files" : "0",\n "total-position-deletes" : "0",\n "total-equality-deletes" : "0"\n },\n "manifest-list" : "/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro",\n "schema-id" : 0\n }, {\n "sequence-number" : 2,\n "snapshot-id" : 4370063500831834784,\n "parent-snapshot-id" : 5114138684830281544,\n "timestamp-ms" : 1743790506942,\n "summary" : {\n "operation" : "append",\n "spark.app.id" : "local-1743790492634",\n "added-data-files" : "1",\n "added-records" : "100",\n "added-files-size" : "967",\n "changed-partition-count" : "1",\n "total-records" : "200",\n "total-files-size" : "1934",\n "total-data-files" : "2",\n "total-delete-files" : "0",\n "total-position-deletes" : "0",\n "total-equality-deletes" : "0"\n },\n "manifest-list" : "/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro",\n "schema-id" : 0\n } ],\n "statistics" : [ ],\n "snapshot-log" : [ {\n "timestamp-ms" : 1743790506546,\n "snapshot-id" : 5114138684830281544\n }, {\n "timestamp-ms" : 1743790506942,\n "snapshot-id" : 4370063500831834784\n } ],\n "metadata-log" : [ {\n "timestamp-ms" : 1743790506546,\n "metadata-file" : "/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/v1.metadata.json"\n } ]\n}', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/v2.metadata.json', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT 
/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/v2.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79%2Fmetadata%2Fv2.metadata.json&user.name=root HTTP/1.1" 201 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/v2.metadata.json', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/v2.metadata.json', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 
'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'Obj\x01\x0e\x16avro.schema\x9e\x1e{"type":"record","name":"manifest_file","fields":[{"name":"manifest_path","type":"string","doc":"Location URI with FS scheme","field-id":500},{"name":"manifest_length","type":"long","doc":"Total file size in bytes","field-id":501},{"name":"partition_spec_id","type":"int","doc":"Spec ID used to write","field-id":502},{"name":"content","type":"int","doc":"Contents of the manifest: 0=data, 1=deletes","field-id":517},{"name":"sequence_number","type":"long","doc":"Sequence number when the manifest was added","field-id":515},{"name":"min_sequence_number","type":"long","doc":"Lowest sequence number in the manifest","field-id":516},{"name":"added_snapshot_id","type":"long","doc":"Snapshot ID that added the manifest","field-id":503},{"name":"added_data_files_count","type":"int","doc":"Added entry count","field-id":504},{"name":"existing_data_files_count","type":"int","doc":"Existing entry count","field-id":505},{"name":"deleted_data_files_count","type":"int","doc":"Deleted entry count","field-id":506},{"name":"added_rows_count","type":"long","doc":"Added rows count","field-id":512},{"name":"existing_rows_count","type":"long","doc":"Existing rows count","field-id":513},{"name":"deleted_rows_count","type":"long","doc":"Deleted rows count","field-id":514},{"name":"partitions","type":["null",{"type":"array","items":{"type":"record","name":"r508","fields":[{"name":"contains_null","type":"boolean","doc":"True if any file has a null partition value","field-id":509},{"name":"contains_nan","type":["null","boolean"],"doc":"True if any file has a nan partition value","default":null,"field-id":518},{"name":"lower_bound","type":["null","bytes"],"doc":"Partition lower bound for all files","default":null,"field-id":510},{"name":"upper_bound","type":["null","bytes"],"doc":"Partition upper bound for all files","default":null,"field-id":511}]},"element-id":508}],"doc":"Summary for each partition","default":null,"field-id":507}]}\x14avro.codec\x0edeflate\x16snapshot-id&5114138684830281544\x1cformat-version\x022\x1esequence-number\x021\x1ciceberg.schema\xd6\x1f{"type":"struct","schema-id":0,"fields":[{"id":500,"name":"manifest_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":501,"name":"manifest_length","required":true,"type":"long","doc":"Total file size in bytes"},{"id":502,"name":"partition_spec_id","required":true,"type":"int","doc":"Spec ID used to write"},{"id":517,"name":"content","required":true,"type":"int","doc":"Contents of the manifest: 0=data, 1=deletes"},{"id":515,"name":"sequence_number","required":true,"type":"long","doc":"Sequence number when the manifest was added"},{"id":516,"name":"min_sequence_number","required":true,"type":"long","doc":"Lowest sequence number in the manifest"},{"id":503,"name":"added_snapshot_id","required":true,"type":"long","doc":"Snapshot ID that added the 
manifest"},{"id":504,"name":"added_data_files_count","required":true,"type":"int","doc":"Added entry count"},{"id":505,"name":"existing_data_files_count","required":true,"type":"int","doc":"Existing entry count"},{"id":506,"name":"deleted_data_files_count","required":true,"type":"int","doc":"Deleted entry count"},{"id":512,"name":"added_rows_count","required":true,"type":"long","doc":"Added rows count"},{"id":513,"name":"existing_rows_count","required":true,"type":"long","doc":"Existing rows count"},{"id":514,"name":"deleted_rows_count","required":true,"type":"long","doc":"Deleted rows count"},{"id":507,"name":"partitions","required":false,"type":{"type":"list","element-id":508,"element":{"type":"struct","fields":[{"id":509,"name":"contains_null","required":true,"type":"boolean","doc":"True if any file has a null partition value"},{"id":518,"name":"contains_nan","required":false,"type":"boolean","doc":"True if any file has a nan partition value"},{"id":510,"name":"lower_bound","required":false,"type":"binary","doc":"Partition lower bound for all files"},{"id":511,"name":"upper_bound","required":false,"type":"binary","doc":"Partition upper bound for all files"}]},"element-required":true},"doc":"Summary for each partition"}]}$parent-snapshot-id\x08null\x00F\xef\x15\xa6\xba\xc3\x1c\'\xf3:W:\xf1n\x85\x96\x02\xa8\x025\xccA\n\xc20\x10@\xd14\xf7\x89\x19b\x9a4\xa7\x19\xa6\x99\xa9\x15Z\x846\xf5\x00\xee\\\x08\xae\xc4\xbd\x87p\xe5\xde\xc3x\x01w\x8a\xe0\xf6?\xf8Wm\xb7YZ\x996\xc8T\xc8\xb2t\xb4\x0c\xc5\x16\x99\x0b\xfe%\x0f\xcb\\dB\x87=w3\x86\x1a\x9a \x9c16-\xa0\x07\xc7H)\x05\xac\xd9\xe7X\x07I]Lv\x94B\xbf#q\xa0\x04\x90\x8coC4\x9e8\x9a\x04\x01\x0c;\xdf0\x05q\xeb$f\x84\x15\xed\xa7\xdd\xabWJ\xeb\xf3\xe3v\xb9\x1f\x8e\xefS\xa5\x95zV\xdf\xa4>F\xef\x15\xa6\xba\xc3\x1c\'\xf3:W:\xf1n\x85\x96', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79%2Fmetadata%2Fsnap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro&user.name=root HTTP/1.1" 201 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 
'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'Obj\x01\x0e\x16avro.schema\x9e\x1e{"type":"record","name":"manifest_file","fields":[{"name":"manifest_path","type":"string","doc":"Location URI with FS scheme","field-id":500},{"name":"manifest_length","type":"long","doc":"Total file size in bytes","field-id":501},{"name":"partition_spec_id","type":"int","doc":"Spec ID used to write","field-id":502},{"name":"content","type":"int","doc":"Contents of the manifest: 0=data, 1=deletes","field-id":517},{"name":"sequence_number","type":"long","doc":"Sequence number when the manifest was 
added","field-id":515},{"name":"min_sequence_number","type":"long","doc":"Lowest sequence number in the manifest","field-id":516},{"name":"added_snapshot_id","type":"long","doc":"Snapshot ID that added the manifest","field-id":503},{"name":"added_data_files_count","type":"int","doc":"Added entry count","field-id":504},{"name":"existing_data_files_count","type":"int","doc":"Existing entry count","field-id":505},{"name":"deleted_data_files_count","type":"int","doc":"Deleted entry count","field-id":506},{"name":"added_rows_count","type":"long","doc":"Added rows count","field-id":512},{"name":"existing_rows_count","type":"long","doc":"Existing rows count","field-id":513},{"name":"deleted_rows_count","type":"long","doc":"Deleted rows count","field-id":514},{"name":"partitions","type":["null",{"type":"array","items":{"type":"record","name":"r508","fields":[{"name":"contains_null","type":"boolean","doc":"True if any file has a null partition value","field-id":509},{"name":"contains_nan","type":["null","boolean"],"doc":"True if any file has a nan partition value","default":null,"field-id":518},{"name":"lower_bound","type":["null","bytes"],"doc":"Partition lower bound for all files","default":null,"field-id":510},{"name":"upper_bound","type":["null","bytes"],"doc":"Partition upper bound for all files","default":null,"field-id":511}]},"element-id":508}],"doc":"Summary for each partition","default":null,"field-id":507}]}\x14avro.codec\x0edeflate\x16snapshot-id&4370063500831834784\x1cformat-version\x022\x1esequence-number\x022\x1ciceberg.schema\xd6\x1f{"type":"struct","schema-id":0,"fields":[{"id":500,"name":"manifest_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":501,"name":"manifest_length","required":true,"type":"long","doc":"Total file size in bytes"},{"id":502,"name":"partition_spec_id","required":true,"type":"int","doc":"Spec ID used to write"},{"id":517,"name":"content","required":true,"type":"int","doc":"Contents of the manifest: 0=data, 1=deletes"},{"id":515,"name":"sequence_number","required":true,"type":"long","doc":"Sequence number when the manifest was added"},{"id":516,"name":"min_sequence_number","required":true,"type":"long","doc":"Lowest sequence number in the manifest"},{"id":503,"name":"added_snapshot_id","required":true,"type":"long","doc":"Snapshot ID that added the manifest"},{"id":504,"name":"added_data_files_count","required":true,"type":"int","doc":"Added entry count"},{"id":505,"name":"existing_data_files_count","required":true,"type":"int","doc":"Existing entry count"},{"id":506,"name":"deleted_data_files_count","required":true,"type":"int","doc":"Deleted entry count"},{"id":512,"name":"added_rows_count","required":true,"type":"long","doc":"Added rows count"},{"id":513,"name":"existing_rows_count","required":true,"type":"long","doc":"Existing rows count"},{"id":514,"name":"deleted_rows_count","required":true,"type":"long","doc":"Deleted rows count"},{"id":507,"name":"partitions","required":false,"type":{"type":"list","element-id":508,"element":{"type":"struct","fields":[{"id":509,"name":"contains_null","required":true,"type":"boolean","doc":"True if any file has a null partition value"},{"id":518,"name":"contains_nan","required":false,"type":"boolean","doc":"True if any file has a nan partition value"},{"id":510,"name":"lower_bound","required":false,"type":"binary","doc":"Partition lower bound for all files"},{"id":511,"name":"upper_bound","required":false,"type":"binary","doc":"Partition upper bound for all 
files"}]},"element-required":true},"doc":"Summary for each partition"}]}$parent-snapshot-id&5114138684830281544\x00)\xa7Vyd7W\x90\x80\xfd\xbbip\x83\xe5G\x04\x98\x03\xb5\xce\xb1JC1\x14\x80\xe14\xf4u\xe2M\xd3\xe4\xe4\x9e\xa7\t\'9\'Vh\x11\xeeM\x05W7\x0b\x85N\xd2]|\x05\x9d\x04Gqq\xf7\x01:\x15w7E\xa8o\xe0\xfc\xc3\xcf\xb7\xd7\xddE\x91,\xc3ybj\xd4\xb1TZ/[\xd7dl\xe9T\xcar=6\x19\x92K\x0b\xaec\x82`{\x10.)\xf6\xd9&o\x1d\'B\x84\x14\xd8\x97\x18@\xb0F\xecV\xd2\xe8\xf7\x88\xb3\x1c1\xf8b\x18|6\x1eC0\xb9:1\x01\xca\x9c*VO\xd6\x99\x95=\xa3\xab\xe1\xf2s\xa1\xd4t\xfa\xb2\xf9x\x7f<\xbc=\\k\xa5^\'Ji\xb5\xffw%1\x10Z\x8b\xc6g\x88\xc6\x13G\x83\x16\xaca\xe7{&\x107G9)\x8f?J\xadw\xcf\xf7wO7\xb7_\xdb\xc9\x1f\xf3\x1b)\xa7Vyd7W\x90\x80\xfd\xbbip\x83\xe5G', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79%2Fmetadata%2Fsnap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro&user.name=root HTTP/1.1" 201 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/ad6a9009-4b67-4ad7-9060-d248da6e239e-m0.avro user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/ad6a9009-4b67-4ad7-9060-d248da6e239e-m0.avro?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} 
Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/ad6a9009-4b67-4ad7-9060-d248da6e239e-m0.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/ad6a9009-4b67-4ad7-9060-d248da6e239e-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/ad6a9009-4b67-4ad7-9060-d248da6e239e-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/ad6a9009-4b67-4ad7-9060-d248da6e239e-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'Obj\x01\x10\x0cschema\xa4\x02{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":false,"type":"long"},{"id":2,"name":"b","required":false,"type":"string"}]}\x16avro.schema\x925{"type":"record","name":"manifest_entry","fields":[{"name":"status","type":"int","field-id":0},{"name":"snapshot_id","type":["null","long"],"default":null,"field-id":1},{"name":"sequence_number","type":["null","long"],"default":null,"field-id":3},{"name":"file_sequence_number","type":["null","long"],"default":null,"field-id":4},{"name":"data_file","type":{"type":"record","name":"r2","fields":[{"name":"content","type":"int","doc":"Contents of the file: 0=data, 1=position deletes, 2=equality deletes","field-id":134},{"name":"file_path","type":"string","doc":"Location URI with FS scheme","field-id":100},{"name":"file_format","type":"string","doc":"File format name: avro, orc, or parquet","field-id":101},{"name":"partition","type":{"type":"record","name":"r102","fields":[]},"doc":"Partition data tuple, schema based on the partition spec","field-id":102},{"name":"record_count","type":"long","doc":"Number of records in the file","field-id":103},{"name":"file_size_in_bytes","type":"long","doc":"Total file size in bytes","field-id":104},{"name":"column_sizes","type":["null",{"type":"array","items":{"type":"record","name":"k117_v118","fields":[{"name":"key","type":"int","field-id":117},{"name":"value","type":"long","field-id":118}]},"logicalType":"map"}],"doc":"Map of column id to total size on disk","default":null,"field-id":108},{"name":"value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k119_v120","fields":[{"name":"key","type":"int","field-id":119},{"name":"value","type":"long","field-id":120}]},"logicalType":"map"}],"doc":"Map of column id to total count, including null and 
NaN","default":null,"field-id":109},{"name":"null_value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k121_v122","fields":[{"name":"key","type":"int","field-id":121},{"name":"value","type":"long","field-id":122}]},"logicalType":"map"}],"doc":"Map of column id to null value count","default":null,"field-id":110},{"name":"nan_value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k138_v139","fields":[{"name":"key","type":"int","field-id":138},{"name":"value","type":"long","field-id":139}]},"logicalType":"map"}],"doc":"Map of column id to number of NaN values in the column","default":null,"field-id":137},{"name":"lower_bounds","type":["null",{"type":"array","items":{"type":"record","name":"k126_v127","fields":[{"name":"key","type":"int","field-id":126},{"name":"value","type":"bytes","field-id":127}]},"logicalType":"map"}],"doc":"Map of column id to lower bound","default":null,"field-id":125},{"name":"upper_bounds","type":["null",{"type":"array","items":{"type":"record","name":"k129_v130","fields":[{"name":"key","type":"int","field-id":129},{"name":"value","type":"bytes","field-id":130}]},"logicalType":"map"}],"doc":"Map of column id to upper bound","default":null,"field-id":128},{"name":"key_metadata","type":["null","bytes"],"doc":"Encryption key metadata blob","default":null,"field-id":131},{"name":"split_offsets","type":["null",{"type":"array","items":"long","element-id":133}],"doc":"Splittable offsets","default":null,"field-id":132},{"name":"equality_ids","type":["null",{"type":"array","items":"int","element-id":136}],"doc":"Equality comparison field IDs","default":null,"field-id":135},{"name":"sort_order_id","type":["null","int"],"doc":"Sort order ID","default":null,"field-id":140}]},"field-id":2}]}\x14avro.codec\x0edeflate\x1cformat-version\x022"partition-spec-id\x020\x1ciceberg.schema\xca+{"type":"struct","schema-id":0,"fields":[{"id":0,"name":"status","required":true,"type":"int"},{"id":1,"name":"snapshot_id","required":false,"type":"long"},{"id":3,"name":"sequence_number","required":false,"type":"long"},{"id":4,"name":"file_sequence_number","required":false,"type":"long"},{"id":2,"name":"data_file","required":true,"type":{"type":"struct","fields":[{"id":134,"name":"content","required":true,"type":"int","doc":"Contents of the file: 0=data, 1=position deletes, 2=equality deletes"},{"id":100,"name":"file_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":101,"name":"file_format","required":true,"type":"string","doc":"File format name: avro, orc, or parquet"},{"id":102,"name":"partition","required":true,"type":{"type":"struct","fields":[]},"doc":"Partition data tuple, schema based on the partition spec"},{"id":103,"name":"record_count","required":true,"type":"long","doc":"Number of records in the file"},{"id":104,"name":"file_size_in_bytes","required":true,"type":"long","doc":"Total file size in bytes"},{"id":108,"name":"column_sizes","required":false,"type":{"type":"map","key-id":117,"key":"int","value-id":118,"value":"long","value-required":true},"doc":"Map of column id to total size on disk"},{"id":109,"name":"value_counts","required":false,"type":{"type":"map","key-id":119,"key":"int","value-id":120,"value":"long","value-required":true},"doc":"Map of column id to total count, including null and NaN"},{"id":110,"name":"null_value_counts","required":false,"type":{"type":"map","key-id":121,"key":"int","value-id":122,"value":"long","value-required":true},"doc":"Map of column id to null value 
count"},{"id":137,"name":"nan_value_counts","required":false,"type":{"type":"map","key-id":138,"key":"int","value-id":139,"value":"long","value-required":true},"doc":"Map of column id to number of NaN values in the column"},{"id":125,"name":"lower_bounds","required":false,"type":{"type":"map","key-id":126,"key":"int","value-id":127,"value":"binary","value-required":true},"doc":"Map of column id to lower bound"},{"id":128,"name":"upper_bounds","required":false,"type":{"type":"map","key-id":129,"key":"int","value-id":130,"value":"binary","value-required":true},"doc":"Map of column id to upper bound"},{"id":131,"name":"key_metadata","required":false,"type":"binary","doc":"Encryption key metadata blob"},{"id":132,"name":"split_offsets","required":false,"type":{"type":"list","element-id":133,"element":"long","element-required":true},"doc":"Splittable offsets"},{"id":135,"name":"equality_ids","required":false,"type":{"type":"list","element-id":136,"element":"int","element-required":true},"doc":"Equality comparison field IDs"},{"id":140,"name":"sort_order_id","required":false,"type":"int","doc":"Sort order ID"}]}}]}\x1cpartition-spec\x04[]\x0econtent\x08data\x00K\xc8?\xd9\xdd\x91\xffC\xbe\x987[\xcd\xc4\xf4X\x02\x9c\x035\x8c\xbbR\xc30\x10EeEE\xaa\xc0\x8f\x08K\x89eyK\nz\xc8@\xbd\xa3\xc7\x1a\x98I\x01\xb6\xfc\x03t\x140T\x944\xe9R\xa4ME\xef_\xa2\x8b\x95!\xb7:\xf7\xee\x99\xe5\xfc\xebw\xfb}x{\xff\xfb(\x18c{^>\x07\xf2\xd4=bt\xc9\x95\x91Z7lR\x99\xa8Ox\xbe\x84\xcd\xd0\'\xeap\x89O\xb1\xed\xb16\xaa\xa9)\x06\xb4\x8dWX\xa9eD\x07P\xa3\x89U\xb0\xa6&h-\x94\xa7o*Gj\x90\xae\xb1+\xa8\xbc\x97\xe4\xac\x97Uc\x8c\xf4+\xed\xa5\x996\xe5\xf7;\xea\x01\x0fz\xc8\xc3\x1e\xf1\xa8\xc7<\xee\x98\'<\xe9\xb8\xa7\x9c\xf0\xb4g<\xeb9\xcf;\xe9\x05/z\xc9\xcb^\xf1\xaaSN;\xe35\xaf{\xc3\x9b\xde\xf2\xb6w\x9c\xf5\xae\xf7\xbc\xef\x03\x1f\xfa\xc89\x1f\xfb\xc4\xa7\xce\xfb\xcc\x05\x17}\xee\x0b\x97|\xe9+_\xfb\xc6\xb7\xbe\xf3\xbd\xcb~\xf0\xa3\x9f\xfc\xec\x8a\xab\xae\xf9\xc5\xaf~\xf3\xbb?\xfc\xe9/\xd7\xfd\xed\x1f7\xdc\xf4\xaf\xff\x00\x02\xc2\xe7q \x03\x00\x00\x15\x00\x15\xa0\t\x15\xea\x02\x15\x8d\xbc\xbf\xb8\t\x1c\x15\xc8\x01\x15\x00\x15\x08\x15\x08\x00\x00\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00\x1d\xd2\xb1\x81B1\x0cD\xc1\x0f\xd7\x10X\xb2$\xf7\xdf\x18\xe7I^\xb4\x93\xed\xeby\x9e\xef\xeb?\xeb&n\xf2f\xdf\xd4M\xdf\xcc\xcdy\xdf\xf1G\xbf\xba44uki\xeb(\xbb\xd8\xc5.v\xb1\x8b]\xecb\x17\xbb\xd8\xc5\x06\x1bl\xb0\xc1\x06\x1bl\xb0\xc1\x06\x1bl\xb2\xc9&\x9bl\xb2\xc9&\x9bl\xb2\xc9nv\xb3\x9b\xdd\xecf7\xbb\xd9\xcdnv\xb3\xc5\x16[l\xb1\xc5\x16[l\xb1\xc5\x16\xdbl\xb3\xcd6\xdbl\xb3\xcd6\xdbl\xb3\xc3\x0e;\xec\xb0\xc3\x0e;\xec\xb0\xc3\x0e{\xd8\xc3\x1e\xf6\xb0\x87=\xeca\x0f{\xd8s\xfe|\xe3\xf3\x03\xd4\xdb\x86\xadP\x02\x00\x00\x19\x11\x02\x19\x18\x08\x00\x00\x00\x00\x00\x00\x00\x00\x19\x18\x08c\x00\x00\x00\x00\x00\x00\x00\x15\x02\x19\x16\x00\x00\x19\x11\x02\x19\x18\x011\x19\x18\x0299\x15\x02\x19\x16\x00\x00\x19\x1c\x16\x08\x15\xaa\x03\x16\x00\x00\x00\x19\x1c\x16\xb2\x03\x15\x9e\x03\x16\x00\x00\x00\x15\x02\x19\x00&\xb2\x03\x1c\x15\x0c\x19%\x00\x08\x19\x18\x01b\x15\x04\x16\xc8\x01\x16\xd4\t\x16\x9e\x03&\xb2\x03<6\x00(\x0299\x18\x011\x00\x19\x1c\x15\x00\x15\x00\x15\x02\x00\x00\x16\xc8\x07\x15\x18\x16\x8e\x07\x15$\x00\x16\xc8\x16\x16\xc8\x01&\x08\x16\xc8\x06\x14\x00\x00\x19\x1c\x18\x0eiceberg.schema\x18\x90\x01{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":true,"type":"long"},{"id":2,"name":"b","required":true,"type":"string"}]}\x00\x18Jparquet-mr version 1.12.3 (build 
f8dced182c4c1fbdec6ccb3185537b5a01e6ed6b)\x19,\x1c\x00\x00\x1c\x00\x00\x00\xcf\x01\x00\x00PAR1', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-19-a87394bb-ea7b-4855-b31b-54bb0ab9edd1-00001.parquet', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-19-a87394bb-ea7b-4855-b31b-54bb0ab9edd1-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79%2Fdata%2F00000-19-a87394bb-ea7b-4855-b31b-54bb0ab9edd1-00001.parquet&user.name=root HTTP/1.1" 201 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-19-a87394bb-ea7b-4855-b31b-54bb0ab9edd1-00001.parquet', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-19-a87394bb-ea7b-4855-b31b-54bb0ab9edd1-00001.parquet', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-23-668e9fa1-dc6c-4679-80eb-2f0de8fcb9cd-00001.parquet user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-23-668e9fa1-dc6c-4679-80eb-2f0de8fcb9cd-00001.parquet?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-23-668e9fa1-dc6c-4679-80eb-2f0de8fcb9cd-00001.parquet?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 
'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-23-668e9fa1-dc6c-4679-80eb-2f0de8fcb9cd-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-23-668e9fa1-dc6c-4679-80eb-2f0de8fcb9cd-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-23-668e9fa1-dc6c-4679-80eb-2f0de8fcb9cd-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'PAR1\x15\x00\x15\xc0\x0c\x15\xf6\x02\x15\xbd\xf3\xd4\x95\x06\x1c\x15\xc8\x01\x15\x00\x15\x08\x15\x08\x00\x00\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00-\xc5\xd7"\x02\x00\x00\x00\xc0\x90Y![$+\xb3\xb23C\x8a\xec\x8c\xca\x8e\xf2\xff\xff\xe0\xc1\xdd\xcb\x05\x02\xffZ\xdc\xea6\x07\xdd\xee\x0ew\xba\xcb\xdd\xeeq\xc8aG\xdc\xeb>\xf7;\xea\x01\x0fz\xc8\xc3\x1e\xf1\xa8\xc7<\xee\x98\'<\xe9\xb8\xa7\x9c\xf0\xb4g<\xeb9\xcf;\xe9\x05/z\xc9\xcb^\xf1\xaaSN;\xe35\xaf{\xc3\x9b\xde\xf2\xb6w\x9c\xf5\xae\xf7\xbc\xef\x03\x1f\xfa\xc89\x1f\xfb\xc4\xa7\xce\xfb\xcc\x05\x17}\xee\x0b\x97|\xe9+_\xfb\xc6\xb7\xbe\xf3\xbd\xcb~\xf0\xa3\x9f\xfc\xec\x8a\xab\xae\xf9\xc5\xaf~\xf3\xbb?\xfc\xe9/\xd7\xfd\xed\x1f7\xdc\xf4\xaf\xff\x00\x02\xc2\xe7q \x03\x00\x00\x15\x00\x15\xa0\t\x15\xea\x02\x15\x8d\xbc\xbf\xb8\t\x1c\x15\xc8\x01\x15\x00\x15\x08\x15\x08\x00\x00\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00\x1d\xd2\xb1\x81B1\x0cD\xc1\x0f\xd7\x10X\xb2$\xf7\xdf\x18\xe7I^\xb4\x93\xed\xeby\x9e\xef\xeb?\xeb&n\xf2f\xdf\xd4M\xdf\xcc\xcdy\xdf\xf1G\xbf\xba44uki\xeb(\xbb\xd8\xc5.v\xb1\x8b]\xecb\x17\xbb\xd8\xc5\x06\x1bl\xb0\xc1\x06\x1bl\xb0\xc1\x06\x1bl\xb2\xc9&\x9bl\xb2\xc9&\x9bl\xb2\xc9nv\xb3\x9b\xdd\xecf7\xbb\xd9\xcdnv\xb3\xc5\x16[l\xb1\xc5\x16[l\xb1\xc5\x16\xdbl\xb3\xcd6\xdbl\xb3\xcd6\xdbl\xb3\xc3\x0e;\xec\xb0\xc3\x0e;\xec\xb0\xc3\x0e{\xd8\xc3\x1e\xf6\xb0\x87=\xeca\x0f{\xd8s\xfe|\xe3\xf3\x03\xd4\xdb\x86\xadP\x02\x00\x00\x19\x11\x02\x19\x18\x08\x00\x00\x00\x00\x00\x00\x00\x00\x19\x18\x08c\x00\x00\x00\x00\x00\x00\x00\x15\x02\x19\x16\x00\x00\x19\x11\x02\x19\x18\x011\x19\x18\x0299\x15\x02\x19\x16\x00\x00\x19\x1c\x16\x08\x15\xaa\x03\x16\x00\x00\x00\x19\x1c\x16\xb2\x03\x15\x9e\x03\x16\x00\x00\x00\x15\x02\x19\x00&\xb2\x03\x1c\x15\x0c\x19%\x00\x08\x19\x18\x01b\x15\x04\x16\xc8\x01\x16\xd4\t\x16\x9e\x03&\xb2\x03<6\x00(\x0299\x18\x011\x00\x19\x1c\x15\x00\x15\x00\x15\x02\x00\x00\x16\xc8\x07\x15\x18\x16\x8e\x07\x15$\x00\x16\xc8\x16\x16\xc8\x01&\x08\x16\xc8\x06\x14\x00\x00\x19\x1c\x18\x0eiceberg.schema\x18\x90\x01{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":true,"type":"long"},{"id":2,"name":"b","required":true,"type":"string"}]}\x00\x18Jparquet-mr version 1.12.3 (build f8dced182c4c1fbdec6ccb3185537b5a01e6ed6b)\x19,\x1c\x00\x00\x1c\x00\x00\x00\xcf\x01\x00\x00PAR1', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': 
'/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-23-668e9fa1-dc6c-4679-80eb-2f0de8fcb9cd-00001.parquet', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-23-668e9fa1-dc6c-4679-80eb-2f0de8fcb9cd-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79%2Fdata%2F00000-23-668e9fa1-dc6c-4679-80eb-2f0de8fcb9cd-00001.parquet&user.name=root HTTP/1.1" 201 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-23-668e9fa1-dc6c-4679-80eb-2f0de8fcb9cd-00001.parquet', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-23-668e9fa1-dc6c-4679-80eb-2f0de8fcb9cd-00001.parquet', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-21-3cfab2b8-56ba-416d-9c8a-f9630cdd1783-00001.parquet user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-21-3cfab2b8-56ba-416d-9c8a-f9630cdd1783-00001.parquet?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-21-3cfab2b8-56ba-416d-9c8a-f9630cdd1783-00001.parquet?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-21-3cfab2b8-56ba-416d-9c8a-f9630cdd1783-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api 
response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-21-3cfab2b8-56ba-416d-9c8a-f9630cdd1783-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-21-3cfab2b8-56ba-416d-9c8a-f9630cdd1783-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'PAR1\x15\x00\x15\xc0\x0c\x15\xf6\x02\x15\xbd\xf3\xd4\x95\x06\x1c\x15\xc8\x01\x15\x00\x15\x08\x15\x08\x00\x00\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00-\xc5\xd7"\x02\x00\x00\x00\xc0\x90Y![$+\xb3\xb23C\x8a\xec\x8c\xca\x8e\xf2\xff\xff\xe0\xc1\xdd\xcb\x05\x02\xffZ\xdc\xea6\x07\xdd\xee\x0ew\xba\xcb\xdd\xeeq\xc8aG\xdc\xeb>\xf7;\xea\x01\x0fz\xc8\xc3\x1e\xf1\xa8\xc7<\xee\x98\'<\xe9\xb8\xa7\x9c\xf0\xb4g<\xeb9\xcf;\xe9\x05/z\xc9\xcb^\xf1\xaaSN;\xe35\xaf{\xc3\x9b\xde\xf2\xb6w\x9c\xf5\xae\xf7\xbc\xef\x03\x1f\xfa\xc89\x1f\xfb\xc4\xa7\xce\xfb\xcc\x05\x17}\xee\x0b\x97|\xe9+_\xfb\xc6\xb7\xbe\xf3\xbd\xcb~\xf0\xa3\x9f\xfc\xec\x8a\xab\xae\xf9\xc5\xaf~\xf3\xbb?\xfc\xe9/\xd7\xfd\xed\x1f7\xdc\xf4\xaf\xff\x00\x02\xc2\xe7q \x03\x00\x00\x15\x00\x15\xa0\t\x15\xea\x02\x15\x8d\xbc\xbf\xb8\t\x1c\x15\xc8\x01\x15\x00\x15\x08\x15\x08\x00\x00\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00\x1d\xd2\xb1\x81B1\x0cD\xc1\x0f\xd7\x10X\xb2$\xf7\xdf\x18\xe7I^\xb4\x93\xed\xeby\x9e\xef\xeb?\xeb&n\xf2f\xdf\xd4M\xdf\xcc\xcdy\xdf\xf1G\xbf\xba44uki\xeb(\xbb\xd8\xc5.v\xb1\x8b]\xecb\x17\xbb\xd8\xc5\x06\x1bl\xb0\xc1\x06\x1bl\xb0\xc1\x06\x1bl\xb2\xc9&\x9bl\xb2\xc9&\x9bl\xb2\xc9nv\xb3\x9b\xdd\xecf7\xbb\xd9\xcdnv\xb3\xc5\x16[l\xb1\xc5\x16[l\xb1\xc5\x16\xdbl\xb3\xcd6\xdbl\xb3\xcd6\xdbl\xb3\xc3\x0e;\xec\xb0\xc3\x0e;\xec\xb0\xc3\x0e{\xd8\xc3\x1e\xf6\xb0\x87=\xeca\x0f{\xd8s\xfe|\xe3\xf3\x03\xd4\xdb\x86\xadP\x02\x00\x00\x19\x11\x02\x19\x18\x08\x00\x00\x00\x00\x00\x00\x00\x00\x19\x18\x08c\x00\x00\x00\x00\x00\x00\x00\x15\x02\x19\x16\x00\x00\x19\x11\x02\x19\x18\x011\x19\x18\x0299\x15\x02\x19\x16\x00\x00\x19\x1c\x16\x08\x15\xaa\x03\x16\x00\x00\x00\x19\x1c\x16\xb2\x03\x15\x9e\x03\x16\x00\x00\x00\x15\x02\x19\x00&\xb2\x03\x1c\x15\x0c\x19%\x00\x08\x19\x18\x01b\x15\x04\x16\xc8\x01\x16\xd4\t\x16\x9e\x03&\xb2\x03<6\x00(\x0299\x18\x011\x00\x19\x1c\x15\x00\x15\x00\x15\x02\x00\x00\x16\xc8\x07\x15\x18\x16\x8e\x07\x15$\x00\x16\xc8\x16\x16\xc8\x01&\x08\x16\xc8\x06\x14\x00\x00\x19\x1c\x18\x0eiceberg.schema\x18\x90\x01{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":true,"type":"long"},{"id":2,"name":"b","required":true,"type":"string"}]}\x00\x18Jparquet-mr version 1.12.3 (build f8dced182c4c1fbdec6ccb3185537b5a01e6ed6b)\x19,\x1c\x00\x00\x1c\x00\x00\x00\xcf\x01\x00\x00PAR1', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-21-3cfab2b8-56ba-416d-9c8a-f9630cdd1783-00001.parquet', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT 
/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-21-3cfab2b8-56ba-416d-9c8a-f9630cdd1783-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79%2Fdata%2F00000-21-3cfab2b8-56ba-416d-9c8a-f9630cdd1783-00001.parquet&user.name=root HTTP/1.1" 201 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-21-3cfab2b8-56ba-416d-9c8a-f9630cdd1783-00001.parquet', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-21-3cfab2b8-56ba-416d-9c8a-f9630cdd1783-00001.parquet', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/91b7954c-d64b-4955-bf2e-56c3af9f4a02-m0.avro user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/91b7954c-d64b-4955-bf2e-56c3af9f4a02-m0.avro?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/91b7954c-d64b-4955-bf2e-56c3af9f4a02-m0.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/91b7954c-d64b-4955-bf2e-56c3af9f4a02-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 
'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/91b7954c-d64b-4955-bf2e-56c3af9f4a02-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/91b7954c-d64b-4955-bf2e-56c3af9f4a02-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'Obj\x01\x10\x0cschema\xa4\x02{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":false,"type":"long"},{"id":2,"name":"b","required":false,"type":"string"}]}\x16avro.schema\x925{"type":"record","name":"manifest_entry","fields":[{"name":"status","type":"int","field-id":0},{"name":"snapshot_id","type":["null","long"],"default":null,"field-id":1},{"name":"sequence_number","type":["null","long"],"default":null,"field-id":3},{"name":"file_sequence_number","type":["null","long"],"default":null,"field-id":4},{"name":"data_file","type":{"type":"record","name":"r2","fields":[{"name":"content","type":"int","doc":"Contents of the file: 0=data, 1=position deletes, 2=equality deletes","field-id":134},{"name":"file_path","type":"string","doc":"Location URI with FS scheme","field-id":100},{"name":"file_format","type":"string","doc":"File format name: avro, orc, or parquet","field-id":101},{"name":"partition","type":{"type":"record","name":"r102","fields":[]},"doc":"Partition data tuple, schema based on the partition spec","field-id":102},{"name":"record_count","type":"long","doc":"Number of records in the file","field-id":103},{"name":"file_size_in_bytes","type":"long","doc":"Total file size in bytes","field-id":104},{"name":"column_sizes","type":["null",{"type":"array","items":{"type":"record","name":"k117_v118","fields":[{"name":"key","type":"int","field-id":117},{"name":"value","type":"long","field-id":118}]},"logicalType":"map"}],"doc":"Map of column id to total size on disk","default":null,"field-id":108},{"name":"value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k119_v120","fields":[{"name":"key","type":"int","field-id":119},{"name":"value","type":"long","field-id":120}]},"logicalType":"map"}],"doc":"Map of column id to total count, including null and NaN","default":null,"field-id":109},{"name":"null_value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k121_v122","fields":[{"name":"key","type":"int","field-id":121},{"name":"value","type":"long","field-id":122}]},"logicalType":"map"}],"doc":"Map of column id to null value count","default":null,"field-id":110},{"name":"nan_value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k138_v139","fields":[{"name":"key","type":"int","field-id":138},{"name":"value","type":"long","field-id":139}]},"logicalType":"map"}],"doc":"Map of column id to number of NaN values in the column","default":null,"field-id":137},{"name":"lower_bounds","type":["null",{"type":"array","items":{"type":"record","name":"k126_v127","fields":[{"name":"key","type":"int","field-id":126},{"name":"value","type":"bytes","field-id":127}]},"logicalType":"map"}],"doc":"Map of column id to lower bound","default":null,"field-id":125},{"name":"upper_bounds","type":["null",{"type":"array","items":{"type":"record","name":"k129_v130","fields":[{"name":"key","type":"int","field-id":129},{"name":"value","type":"bytes","field-id":130}]},"logicalType":"map"}],"doc":"Map of column id to 
upper bound","default":null,"field-id":128},{"name":"key_metadata","type":["null","bytes"],"doc":"Encryption key metadata blob","default":null,"field-id":131},{"name":"split_offsets","type":["null",{"type":"array","items":"long","element-id":133}],"doc":"Splittable offsets","default":null,"field-id":132},{"name":"equality_ids","type":["null",{"type":"array","items":"int","element-id":136}],"doc":"Equality comparison field IDs","default":null,"field-id":135},{"name":"sort_order_id","type":["null","int"],"doc":"Sort order ID","default":null,"field-id":140}]},"field-id":2}]}\x14avro.codec\x0edeflate\x1cformat-version\x022"partition-spec-id\x020\x1ciceberg.schema\xca+{"type":"struct","schema-id":0,"fields":[{"id":0,"name":"status","required":true,"type":"int"},{"id":1,"name":"snapshot_id","required":false,"type":"long"},{"id":3,"name":"sequence_number","required":false,"type":"long"},{"id":4,"name":"file_sequence_number","required":false,"type":"long"},{"id":2,"name":"data_file","required":true,"type":{"type":"struct","fields":[{"id":134,"name":"content","required":true,"type":"int","doc":"Contents of the file: 0=data, 1=position deletes, 2=equality deletes"},{"id":100,"name":"file_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":101,"name":"file_format","required":true,"type":"string","doc":"File format name: avro, orc, or parquet"},{"id":102,"name":"partition","required":true,"type":{"type":"struct","fields":[]},"doc":"Partition data tuple, schema based on the partition spec"},{"id":103,"name":"record_count","required":true,"type":"long","doc":"Number of records in the file"},{"id":104,"name":"file_size_in_bytes","required":true,"type":"long","doc":"Total file size in bytes"},{"id":108,"name":"column_sizes","required":false,"type":{"type":"map","key-id":117,"key":"int","value-id":118,"value":"long","value-required":true},"doc":"Map of column id to total size on disk"},{"id":109,"name":"value_counts","required":false,"type":{"type":"map","key-id":119,"key":"int","value-id":120,"value":"long","value-required":true},"doc":"Map of column id to total count, including null and NaN"},{"id":110,"name":"null_value_counts","required":false,"type":{"type":"map","key-id":121,"key":"int","value-id":122,"value":"long","value-required":true},"doc":"Map of column id to null value count"},{"id":137,"name":"nan_value_counts","required":false,"type":{"type":"map","key-id":138,"key":"int","value-id":139,"value":"long","value-required":true},"doc":"Map of column id to number of NaN values in the column"},{"id":125,"name":"lower_bounds","required":false,"type":{"type":"map","key-id":126,"key":"int","value-id":127,"value":"binary","value-required":true},"doc":"Map of column id to lower bound"},{"id":128,"name":"upper_bounds","required":false,"type":{"type":"map","key-id":129,"key":"int","value-id":130,"value":"binary","value-required":true},"doc":"Map of column id to upper bound"},{"id":131,"name":"key_metadata","required":false,"type":"binary","doc":"Encryption key metadata blob"},{"id":132,"name":"split_offsets","required":false,"type":{"type":"list","element-id":133,"element":"long","element-required":true},"doc":"Splittable offsets"},{"id":135,"name":"equality_ids","required":false,"type":{"type":"list","element-id":136,"element":"int","element-required":true},"doc":"Equality comparison field IDs"},{"id":140,"name":"sort_order_id","required":false,"type":"int","doc":"Sort order 
ID"}]}}]}\x1cpartition-spec\x04[]\x0econtent\x08data\x00N\xa1\x91W#4\x9a$\xc4\x02\x0f\xa1N\xd6\xebD\x02\x9e\x035\x8c\xadR\x031\x14\x85\xb3i\x04\n\xfa"\xe9&\xfb\x93Md\x05\x1e\x18\xd0w\x92\xdc\xa4\xedLE\xbb\x9b\x15}\x85\n\x1e\x01\x87A \xd1\xc8\x1d\x0c\x9e\x07\xe09h:\xf4\xa8\xef\xcc\xf9\xe6P\xfay\xfc\xf9\xfe\xf8\xfdz=\x10B\xdei\xb9\xf1\xc1\x85~\x05h\x93-1D;nS\x99\xc2\x90\xe0\xb2\xf8\xed8\xa4\xd0C\x05k\x8c\x03\xa8Vh\x15\xd0C\xa7\x9d\x80FT\x08\xd6\x18\x05-6\xbekU0\xb13\xe5\xf9M\xe4\xf0J\xf2\xdaG\xeb*\xa7y\xab\x9c\xe5\x8dT\xc8\x8d\xd7\x96G\xa3j\xe1\x11e\xa7k\x9ee\xb9\xd8\xd9~?\x86t}\xb7|\xb8\x7f\xba}\x9c\x8a\xe7\x1b\xca\xe8\xdb\x8c\xbd\xcc\xc8\t\xa6\x82ME\x06\xc2\x08\xa1\x19\xe6\xe4?\x8c\xcas\xf7\x97\xce\x8c99\xf4*\x8b\x7fN\xa1\x91W#4\x9a$\xc4\x02\x0f\xa1N\xd6\xebD', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/91b7954c-d64b-4955-bf2e-56c3af9f4a02-m0.avro', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/91b7954c-d64b-4955-bf2e-56c3af9f4a02-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79%2Fmetadata%2F91b7954c-d64b-4955-bf2e-56c3af9f4a02-m0.avro&user.name=root HTTP/1.1" 201 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/91b7954c-d64b-4955-bf2e-56c3af9f4a02-m0.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/91b7954c-d64b-4955-bf2e-56c3af9f4a02-m0.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/v2.metadata.json user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/v2.metadata.json?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT 
/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/v2.metadata.json?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/v2.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/v2.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/v2.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'{\n "format-version" : 2,\n "table-uuid" : "c91f0788-0ba8-4ca9-9925-2ab9a7a11eb7",\n "location" : "/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79",\n "last-sequence-number" : 2,\n "last-updated-ms" : 1743790506942,\n "last-column-id" : 2,\n "current-schema-id" : 0,\n "schemas" : [ {\n "type" : "struct",\n "schema-id" : 0,\n "fields" : [ {\n "id" : 1,\n "name" : "a",\n "required" : false,\n "type" : "long"\n }, {\n "id" : 2,\n "name" : "b",\n "required" : false,\n "type" : "string"\n } ]\n } ],\n "default-spec-id" : 0,\n "partition-specs" : [ {\n "spec-id" : 0,\n "fields" : [ ]\n } ],\n "last-partition-id" : 999,\n "default-sort-order-id" : 0,\n "sort-orders" : [ {\n "order-id" : 0,\n "fields" : [ ]\n } ],\n "properties" : {\n "owner" : "root"\n },\n "current-snapshot-id" : 4370063500831834784,\n "refs" : {\n "main" : {\n "snapshot-id" : 4370063500831834784,\n "type" : "branch"\n }\n },\n "snapshots" : [ {\n "sequence-number" : 1,\n "snapshot-id" : 5114138684830281544,\n "timestamp-ms" : 1743790506546,\n "summary" : {\n "operation" : "append",\n "spark.app.id" : "local-1743790492634",\n "added-data-files" : "1",\n "added-records" : "100",\n "added-files-size" : "967",\n "changed-partition-count" : "1",\n "total-records" : "100",\n "total-files-size" : "967",\n "total-data-files" : "1",\n "total-delete-files" : "0",\n "total-position-deletes" : "0",\n "total-equality-deletes" : "0"\n },\n "manifest-list" : "/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro",\n "schema-id" : 0\n }, {\n "sequence-number" : 2,\n "snapshot-id" : 4370063500831834784,\n "parent-snapshot-id" : 5114138684830281544,\n "timestamp-ms" : 1743790506942,\n "summary" : {\n "operation" : "append",\n "spark.app.id" : "local-1743790492634",\n "added-data-files" : "1",\n "added-records" : "100",\n "added-files-size" : "967",\n "changed-partition-count" : "1",\n "total-records" : "200",\n "total-files-size" : "1934",\n 
"total-data-files" : "2",\n "total-delete-files" : "0",\n "total-position-deletes" : "0",\n "total-equality-deletes" : "0"\n },\n "manifest-list" : "/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro",\n "schema-id" : 0\n } ],\n "statistics" : [ ],\n "snapshot-log" : [ {\n "timestamp-ms" : 1743790506546,\n "snapshot-id" : 5114138684830281544\n }, {\n "timestamp-ms" : 1743790506942,\n "snapshot-id" : 4370063500831834784\n } ],\n "metadata-log" : [ {\n "timestamp-ms" : 1743790506546,\n "metadata-file" : "/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/v1.metadata.json"\n } ]\n}', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/v2.metadata.json', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/v2.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79%2Fmetadata%2Fv2.metadata.json&user.name=root HTTP/1.1" 201 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/v2.metadata.json', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/v2.metadata.json', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT 
/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'Obj\x01\x0e\x16avro.schema\x9e\x1e{"type":"record","name":"manifest_file","fields":[{"name":"manifest_path","type":"string","doc":"Location URI with FS scheme","field-id":500},{"name":"manifest_length","type":"long","doc":"Total file size in bytes","field-id":501},{"name":"partition_spec_id","type":"int","doc":"Spec ID used to write","field-id":502},{"name":"content","type":"int","doc":"Contents of the manifest: 0=data, 1=deletes","field-id":517},{"name":"sequence_number","type":"long","doc":"Sequence number when the manifest was added","field-id":515},{"name":"min_sequence_number","type":"long","doc":"Lowest sequence number in the manifest","field-id":516},{"name":"added_snapshot_id","type":"long","doc":"Snapshot ID that added the manifest","field-id":503},{"name":"added_data_files_count","type":"int","doc":"Added entry count","field-id":504},{"name":"existing_data_files_count","type":"int","doc":"Existing entry count","field-id":505},{"name":"deleted_data_files_count","type":"int","doc":"Deleted entry count","field-id":506},{"name":"added_rows_count","type":"long","doc":"Added rows count","field-id":512},{"name":"existing_rows_count","type":"long","doc":"Existing rows count","field-id":513},{"name":"deleted_rows_count","type":"long","doc":"Deleted rows count","field-id":514},{"name":"partitions","type":["null",{"type":"array","items":{"type":"record","name":"r508","fields":[{"name":"contains_null","type":"boolean","doc":"True if any file has a null partition value","field-id":509},{"name":"contains_nan","type":["null","boolean"],"doc":"True if any file has a nan partition value","default":null,"field-id":518},{"name":"lower_bound","type":["null","bytes"],"doc":"Partition lower bound for all files","default":null,"field-id":510},{"name":"upper_bound","type":["null","bytes"],"doc":"Partition upper bound for all 
files","default":null,"field-id":511}]},"element-id":508}],"doc":"Summary for each partition","default":null,"field-id":507}]}\x14avro.codec\x0edeflate\x16snapshot-id&5114138684830281544\x1cformat-version\x022\x1esequence-number\x021\x1ciceberg.schema\xd6\x1f{"type":"struct","schema-id":0,"fields":[{"id":500,"name":"manifest_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":501,"name":"manifest_length","required":true,"type":"long","doc":"Total file size in bytes"},{"id":502,"name":"partition_spec_id","required":true,"type":"int","doc":"Spec ID used to write"},{"id":517,"name":"content","required":true,"type":"int","doc":"Contents of the manifest: 0=data, 1=deletes"},{"id":515,"name":"sequence_number","required":true,"type":"long","doc":"Sequence number when the manifest was added"},{"id":516,"name":"min_sequence_number","required":true,"type":"long","doc":"Lowest sequence number in the manifest"},{"id":503,"name":"added_snapshot_id","required":true,"type":"long","doc":"Snapshot ID that added the manifest"},{"id":504,"name":"added_data_files_count","required":true,"type":"int","doc":"Added entry count"},{"id":505,"name":"existing_data_files_count","required":true,"type":"int","doc":"Existing entry count"},{"id":506,"name":"deleted_data_files_count","required":true,"type":"int","doc":"Deleted entry count"},{"id":512,"name":"added_rows_count","required":true,"type":"long","doc":"Added rows count"},{"id":513,"name":"existing_rows_count","required":true,"type":"long","doc":"Existing rows count"},{"id":514,"name":"deleted_rows_count","required":true,"type":"long","doc":"Deleted rows count"},{"id":507,"name":"partitions","required":false,"type":{"type":"list","element-id":508,"element":{"type":"struct","fields":[{"id":509,"name":"contains_null","required":true,"type":"boolean","doc":"True if any file has a null partition value"},{"id":518,"name":"contains_nan","required":false,"type":"boolean","doc":"True if any file has a nan partition value"},{"id":510,"name":"lower_bound","required":false,"type":"binary","doc":"Partition lower bound for all files"},{"id":511,"name":"upper_bound","required":false,"type":"binary","doc":"Partition upper bound for all files"}]},"element-required":true},"doc":"Summary for each partition"}]}$parent-snapshot-id\x08null\x00F\xef\x15\xa6\xba\xc3\x1c\'\xf3:W:\xf1n\x85\x96\x02\xa8\x025\xccA\n\xc20\x10@\xd14\xf7\x89\x19b\x9a4\xa7\x19\xa6\x99\xa9\x15Z\x846\xf5\x00\xee\\\x08\xae\xc4\xbd\x87p\xe5\xde\xc3x\x01w\x8a\xe0\xf6?\xf8Wm\xb7YZ\x996\xc8T\xc8\xb2t\xb4\x0c\xc5\x16\x99\x0b\xfe%\x0f\xcb\\dB\x87=w3\x86\x1a\x9a \x9c16-\xa0\x07\xc7H)\x05\xac\xd9\xe7X\x07I]Lv\x94B\xbf#q\xa0\x04\x90\x8coC4\x9e8\x9a\x04\x01\x0c;\xdf0\x05q\xeb$f\x84\x15\xed\xa7\xdd\xabWJ\xeb\xf3\xe3v\xb9\x1f\x8e\xefS\xa5\x95zV\xdf\xa4>F\xef\x15\xa6\xba\xc3\x1c\'\xf3:W:\xf1n\x85\x96', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT 
/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79%2Fmetadata%2Fsnap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro&user.name=root HTTP/1.1" 201 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 
'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'Obj\x01\x0e\x16avro.schema\x9e\x1e{"type":"record","name":"manifest_file","fields":[{"name":"manifest_path","type":"string","doc":"Location URI with FS scheme","field-id":500},{"name":"manifest_length","type":"long","doc":"Total file size in bytes","field-id":501},{"name":"partition_spec_id","type":"int","doc":"Spec ID used to write","field-id":502},{"name":"content","type":"int","doc":"Contents of the manifest: 0=data, 1=deletes","field-id":517},{"name":"sequence_number","type":"long","doc":"Sequence number when the manifest was added","field-id":515},{"name":"min_sequence_number","type":"long","doc":"Lowest sequence number in the manifest","field-id":516},{"name":"added_snapshot_id","type":"long","doc":"Snapshot ID that added the manifest","field-id":503},{"name":"added_data_files_count","type":"int","doc":"Added entry count","field-id":504},{"name":"existing_data_files_count","type":"int","doc":"Existing entry count","field-id":505},{"name":"deleted_data_files_count","type":"int","doc":"Deleted entry count","field-id":506},{"name":"added_rows_count","type":"long","doc":"Added rows count","field-id":512},{"name":"existing_rows_count","type":"long","doc":"Existing rows count","field-id":513},{"name":"deleted_rows_count","type":"long","doc":"Deleted rows count","field-id":514},{"name":"partitions","type":["null",{"type":"array","items":{"type":"record","name":"r508","fields":[{"name":"contains_null","type":"boolean","doc":"True if any file has a null partition value","field-id":509},{"name":"contains_nan","type":["null","boolean"],"doc":"True if any file has a nan partition value","default":null,"field-id":518},{"name":"lower_bound","type":["null","bytes"],"doc":"Partition lower bound for all files","default":null,"field-id":510},{"name":"upper_bound","type":["null","bytes"],"doc":"Partition upper bound for all files","default":null,"field-id":511}]},"element-id":508}],"doc":"Summary for each partition","default":null,"field-id":507}]}\x14avro.codec\x0edeflate\x16snapshot-id&4370063500831834784\x1cformat-version\x022\x1esequence-number\x022\x1ciceberg.schema\xd6\x1f{"type":"struct","schema-id":0,"fields":[{"id":500,"name":"manifest_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":501,"name":"manifest_length","required":true,"type":"long","doc":"Total file size in bytes"},{"id":502,"name":"partition_spec_id","required":true,"type":"int","doc":"Spec ID used to write"},{"id":517,"name":"content","required":true,"type":"int","doc":"Contents of the manifest: 0=data, 1=deletes"},{"id":515,"name":"sequence_number","required":true,"type":"long","doc":"Sequence number when the manifest was added"},{"id":516,"name":"min_sequence_number","required":true,"type":"long","doc":"Lowest sequence number in the manifest"},{"id":503,"name":"added_snapshot_id","required":true,"type":"long","doc":"Snapshot ID that added the 
manifest"},{"id":504,"name":"added_data_files_count","required":true,"type":"int","doc":"Added entry count"},{"id":505,"name":"existing_data_files_count","required":true,"type":"int","doc":"Existing entry count"},{"id":506,"name":"deleted_data_files_count","required":true,"type":"int","doc":"Deleted entry count"},{"id":512,"name":"added_rows_count","required":true,"type":"long","doc":"Added rows count"},{"id":513,"name":"existing_rows_count","required":true,"type":"long","doc":"Existing rows count"},{"id":514,"name":"deleted_rows_count","required":true,"type":"long","doc":"Deleted rows count"},{"id":507,"name":"partitions","required":false,"type":{"type":"list","element-id":508,"element":{"type":"struct","fields":[{"id":509,"name":"contains_null","required":true,"type":"boolean","doc":"True if any file has a null partition value"},{"id":518,"name":"contains_nan","required":false,"type":"boolean","doc":"True if any file has a nan partition value"},{"id":510,"name":"lower_bound","required":false,"type":"binary","doc":"Partition lower bound for all files"},{"id":511,"name":"upper_bound","required":false,"type":"binary","doc":"Partition upper bound for all files"}]},"element-required":true},"doc":"Summary for each partition"}]}$parent-snapshot-id&5114138684830281544\x00)\xa7Vyd7W\x90\x80\xfd\xbbip\x83\xe5G\x04\x98\x03\xb5\xce\xb1JC1\x14\x80\xe14\xf4u\xe2M\xd3\xe4\xe4\x9e\xa7\t\'9\'Vh\x11\xeeM\x05W7\x0b\x85N\xd2]|\x05\x9d\x04Gqq\xf7\x01:\x15w7E\xa8o\xe0\xfc\xc3\xcf\xb7\xd7\xddE\x91,\xc3ybj\xd4\xb1TZ/[\xd7dl\xe9T\xcar=6\x19\x92K\x0b\xaec\x82`{\x10.)\xf6\xd9&o\x1d\'B\x84\x14\xd8\x97\x18@\xb0F\xecV\xd2\xe8\xf7\x88\xb3\x1c1\xf8b\x18|6\x1eC0\xb9:1\x01\xca\x9c*VO\xd6\x99\x95=\xa3\xab\xe1\xf2s\xa1\xd4t\xfa\xb2\xf9x\x7f<\xbc=\\k\xa5^\'Ji\xb5\xffw%1\x10Z\x8b\xc6g\x88\xc6\x13G\x83\x16\xaca\xe7{&\x107G9)\x8f?J\xadw\xcf\xf7wO7\xb7_\xdb\xc9\x1f\xf3\x1b)\xa7Vyd7W\x90\x80\xfd\xbbip\x83\xe5G', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79%2Fmetadata%2Fsnap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro&user.name=root HTTP/1.1" 201 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 
'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/ad6a9009-4b67-4ad7-9060-d248da6e239e-m0.avro user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/ad6a9009-4b67-4ad7-9060-d248da6e239e-m0.avro?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/ad6a9009-4b67-4ad7-9060-d248da6e239e-m0.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/ad6a9009-4b67-4ad7-9060-d248da6e239e-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/ad6a9009-4b67-4ad7-9060-d248da6e239e-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/ad6a9009-4b67-4ad7-9060-d248da6e239e-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'Obj\x01\x10\x0cschema\xa4\x02{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":false,"type":"long"},{"id":2,"name":"b","required":false,"type":"string"}]}\x16avro.schema\x925{"type":"record","name":"manifest_entry","fields":[{"name":"status","type":"int","field-id":0},{"name":"snapshot_id","type":["null","long"],"default":null,"field-id":1},{"name":"sequence_number","type":["null","long"],"default":null,"field-id":3},{"name":"file_sequence_number","type":["null","long"],"default":null,"field-id":4},{"name":"data_file","type":{"type":"record","name":"r2","fields":[{"name":"content","type":"int","doc":"Contents of the file: 0=data, 1=position deletes, 2=equality 
deletes","field-id":134},{"name":"file_path","type":"string","doc":"Location URI with FS scheme","field-id":100},{"name":"file_format","type":"string","doc":"File format name: avro, orc, or parquet","field-id":101},{"name":"partition","type":{"type":"record","name":"r102","fields":[]},"doc":"Partition data tuple, schema based on the partition spec","field-id":102},{"name":"record_count","type":"long","doc":"Number of records in the file","field-id":103},{"name":"file_size_in_bytes","type":"long","doc":"Total file size in bytes","field-id":104},{"name":"column_sizes","type":["null",{"type":"array","items":{"type":"record","name":"k117_v118","fields":[{"name":"key","type":"int","field-id":117},{"name":"value","type":"long","field-id":118}]},"logicalType":"map"}],"doc":"Map of column id to total size on disk","default":null,"field-id":108},{"name":"value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k119_v120","fields":[{"name":"key","type":"int","field-id":119},{"name":"value","type":"long","field-id":120}]},"logicalType":"map"}],"doc":"Map of column id to total count, including null and NaN","default":null,"field-id":109},{"name":"null_value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k121_v122","fields":[{"name":"key","type":"int","field-id":121},{"name":"value","type":"long","field-id":122}]},"logicalType":"map"}],"doc":"Map of column id to null value count","default":null,"field-id":110},{"name":"nan_value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k138_v139","fields":[{"name":"key","type":"int","field-id":138},{"name":"value","type":"long","field-id":139}]},"logicalType":"map"}],"doc":"Map of column id to number of NaN values in the column","default":null,"field-id":137},{"name":"lower_bounds","type":["null",{"type":"array","items":{"type":"record","name":"k126_v127","fields":[{"name":"key","type":"int","field-id":126},{"name":"value","type":"bytes","field-id":127}]},"logicalType":"map"}],"doc":"Map of column id to lower bound","default":null,"field-id":125},{"name":"upper_bounds","type":["null",{"type":"array","items":{"type":"record","name":"k129_v130","fields":[{"name":"key","type":"int","field-id":129},{"name":"value","type":"bytes","field-id":130}]},"logicalType":"map"}],"doc":"Map of column id to upper bound","default":null,"field-id":128},{"name":"key_metadata","type":["null","bytes"],"doc":"Encryption key metadata blob","default":null,"field-id":131},{"name":"split_offsets","type":["null",{"type":"array","items":"long","element-id":133}],"doc":"Splittable offsets","default":null,"field-id":132},{"name":"equality_ids","type":["null",{"type":"array","items":"int","element-id":136}],"doc":"Equality comparison field IDs","default":null,"field-id":135},{"name":"sort_order_id","type":["null","int"],"doc":"Sort order ID","default":null,"field-id":140}]},"field-id":2}]}\x14avro.codec\x0edeflate\x1cformat-version\x022"partition-spec-id\x020\x1ciceberg.schema\xca+{"type":"struct","schema-id":0,"fields":[{"id":0,"name":"status","required":true,"type":"int"},{"id":1,"name":"snapshot_id","required":false,"type":"long"},{"id":3,"name":"sequence_number","required":false,"type":"long"},{"id":4,"name":"file_sequence_number","required":false,"type":"long"},{"id":2,"name":"data_file","required":true,"type":{"type":"struct","fields":[{"id":134,"name":"content","required":true,"type":"int","doc":"Contents of the file: 0=data, 1=position deletes, 2=equality 
deletes"},{"id":100,"name":"file_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":101,"name":"file_format","required":true,"type":"string","doc":"File format name: avro, orc, or parquet"},{"id":102,"name":"partition","required":true,"type":{"type":"struct","fields":[]},"doc":"Partition data tuple, schema based on the partition spec"},{"id":103,"name":"record_count","required":true,"type":"long","doc":"Number of records in the file"},{"id":104,"name":"file_size_in_bytes","required":true,"type":"long","doc":"Total file size in bytes"},{"id":108,"name":"column_sizes","required":false,"type":{"type":"map","key-id":117,"key":"int","value-id":118,"value":"long","value-required":true},"doc":"Map of column id to total size on disk"},{"id":109,"name":"value_counts","required":false,"type":{"type":"map","key-id":119,"key":"int","value-id":120,"value":"long","value-required":true},"doc":"Map of column id to total count, including null and NaN"},{"id":110,"name":"null_value_counts","required":false,"type":{"type":"map","key-id":121,"key":"int","value-id":122,"value":"long","value-required":true},"doc":"Map of column id to null value count"},{"id":137,"name":"nan_value_counts","required":false,"type":{"type":"map","key-id":138,"key":"int","value-id":139,"value":"long","value-required":true},"doc":"Map of column id to number of NaN values in the column"},{"id":125,"name":"lower_bounds","required":false,"type":{"type":"map","key-id":126,"key":"int","value-id":127,"value":"binary","value-required":true},"doc":"Map of column id to lower bound"},{"id":128,"name":"upper_bounds","required":false,"type":{"type":"map","key-id":129,"key":"int","value-id":130,"value":"binary","value-required":true},"doc":"Map of column id to upper bound"},{"id":131,"name":"key_metadata","required":false,"type":"binary","doc":"Encryption key metadata blob"},{"id":132,"name":"split_offsets","required":false,"type":{"type":"list","element-id":133,"element":"long","element-required":true},"doc":"Splittable offsets"},{"id":135,"name":"equality_ids","required":false,"type":{"type":"list","element-id":136,"element":"int","element-required":true},"doc":"Equality comparison field IDs"},{"id":140,"name":"sort_order_id","required":false,"type":"int","doc":"Sort order ID"}]}}]}\x1cpartition-spec\x04[]\x0econtent\x08data\x00K\xc8?\xd9\xdd\x91\xffC\xbe\x987[\xcd\xc4\xf4X\x02\x9c\x035\x8c\xbbR\xc30\x10EeEE\xaa\xc0\x8f\x08K\x89eyK\nz\xc8@\xbd\xa3\xc7\x1a\x98I\x01\xb6\xfc\x03t\x140T\x944\xe9R\xa4ME\xef_\xa2\x8b\x95!\xb7:\xf7\xee\x99\xe5\xfc\xebw\xfb}x{\xff\xfb(\x18c{^>\x07\xf2\xd4=bt\xc9\x95\x91Z7lR\x99\xa8Ox\xbe\x84\xcd\xd0\'\xeap\x89O\xb1\xed\xb16\xaa\xa9)\x06\xb4\x8dWX\xa9eD\x07P\xa3\x89U\xb0\xa6&h-\x94\xa7o*Gj\x90\xae\xb1+\xa8\xbc\x97\xe4\xac\x97Uc\x8c\xf4+\xed\xa5\x996\xe5JV\xbb\x8e\x87\xae\xbc_\x1cy\x94\xc5\x7f\xe7\x01\x1a7D\xb6\xc6\xcc\x01\n\xc3\xc2\x1c8\xa9\x05', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/17db9cfb-7964-4f0a-ba5a-846c0f7c0255-m0.avro', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT 
/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/17db9cfb-7964-4f0a-ba5a-846c0f7c0255-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79%2Fmetadata%2F17db9cfb-7964-4f0a-ba5a-846c0f7c0255-m0.avro&user.name=root HTTP/1.1" 201 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/17db9cfb-7964-4f0a-ba5a-846c0f7c0255-m0.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/17db9cfb-7964-4f0a-ba5a-846c0f7c0255-m0.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/version-hint.text user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/version-hint.text?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/version-hint.text?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/version-hint.text?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/version-hint.text?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 
'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/version-hint.text?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'3', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/version-hint.text', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50075 http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/version-hint.text?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79%2Fmetadata%2Fversion-hint.text&user.name=root HTTP/1.1" 201 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/version-hint.text', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/version-hint.text', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata user.name=root 172.16.2.2:50070 Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-7913632545671200363-1-17db9cfb-7964-4f0a-ba5a-846c0f7c0255.avro user:root, principal:None CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-7913632545671200363-1-17db9cfb-7964-4f0a-ba5a-846c0f7c0255.avro?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} Starting new HTTP connection (1): 172.16.2.2:50070 http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-7913632545671200363-1-17db9cfb-7964-4f0a-ba5a-846c0f7c0255.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 
'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-7913632545671200363-1-17db9cfb-7964-4f0a-ba5a-846c0f7c0255.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-7913632545671200363-1-17db9cfb-7964-4f0a-ba5a-846c0f7c0255.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-7913632545671200363-1-17db9cfb-7964-4f0a-ba5a-846c0f7c0255.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'Obj\x01\x0e\x16avro.schema\x9e\x1e{"type":"record","name":"manifest_file","fields":[{"name":"manifest_path","type":"string","doc":"Location URI with FS scheme","field-id":500},{"name":"manifest_length","type":"long","doc":"Total file size in bytes","field-id":501},{"name":"partition_spec_id","type":"int","doc":"Spec ID used to write","field-id":502},{"name":"content","type":"int","doc":"Contents of the manifest: 0=data, 1=deletes","field-id":517},{"name":"sequence_number","type":"long","doc":"Sequence number when the manifest was added","field-id":515},{"name":"min_sequence_number","type":"long","doc":"Lowest sequence number in the manifest","field-id":516},{"name":"added_snapshot_id","type":"long","doc":"Snapshot ID that added the manifest","field-id":503},{"name":"added_data_files_count","type":"int","doc":"Added entry count","field-id":504},{"name":"existing_data_files_count","type":"int","doc":"Existing entry count","field-id":505},{"name":"deleted_data_files_count","type":"int","doc":"Deleted entry count","field-id":506},{"name":"added_rows_count","type":"long","doc":"Added rows count","field-id":512},{"name":"existing_rows_count","type":"long","doc":"Existing rows count","field-id":513},{"name":"deleted_rows_count","type":"long","doc":"Deleted rows count","field-id":514},{"name":"partitions","type":["null",{"type":"array","items":{"type":"record","name":"r508","fields":[{"name":"contains_null","type":"boolean","doc":"True if any file has a null partition value","field-id":509},{"name":"contains_nan","type":["null","boolean"],"doc":"True if any file has a nan partition value","default":null,"field-id":518},{"name":"lower_bound","type":["null","bytes"],"doc":"Partition lower bound for all files","default":null,"field-id":510},{"name":"upper_bound","type":["null","bytes"],"doc":"Partition upper bound for all files","default":null,"field-id":511}]},"element-id":508}],"doc":"Summary for each partition","default":null,"field-id":507}]}\x14avro.codec\x0edeflate\x16snapshot-id&7913632545671200363\x1cformat-version\x022\x1esequence-number\x023\x1ciceberg.schema\xd6\x1f{"type":"struct","schema-id":0,"fields":[{"id":500,"name":"manifest_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":501,"name":"manifest_length","required":true,"type":"long","doc":"Total file size in 
bytes"},{"id":502,"name":"partition_spec_id","required":true,"type":"int","doc":"Spec ID used to write"},{"id":517,"name":"content","required":true,"type":"int","doc":"Contents of the manifest: 0=data, 1=deletes"},{"id":515,"name":"sequence_number","required":true,"type":"long","doc":"Sequence number when the manifest was added"},{"id":516,"name":"min_sequence_number","required":true,"type":"long","doc":"Lowest sequence number in the manifest"},{"id":503,"name":"added_snapshot_id","required":true,"type":"long","doc":"Snapshot ID that added the manifest"},{"id":504,"name":"added_data_files_count","required":true,"type":"int","doc":"Added entry count"},{"id":505,"name":"existing_data_files_count","required":true,"type":"int","doc":"Existing entry count"},{"id":506,"name":"deleted_data_files_count","required":true,"type":"int","doc":"Deleted entry count"},{"id":512,"name":"added_rows_count","required":true,"type":"long","doc":"Added rows count"},{"id":513,"name":"existing_rows_count","required":true,"type":"long","doc":"Existing rows count"},{"id":514,"name":"deleted_rows_count","required":true,"type":"long","doc":"Deleted rows count"},{"id":507,"name":"partitions","required":false,"type":{"type":"list","element-id":508,"element":{"type":"struct","fields":[{"id":509,"name":"contains_null","required":true,"type":"boolean","doc":"True if any file has a null partition value"},{"id":518,"name":"contains_nan","required":false,"type":"boolean","doc":"True if any file has a nan partition value"},{"id":510,"name":"lower_bound","required":false,"type":"binary","doc":"Partition lower bound for all files"},{"id":511,"name":"upper_bound","required":false,"type":"binary","doc":"Partition upper bound for all files"}]},"element-required":true},"doc":"Summary for each partition"}]}$parent-snapshot-id&4370063500831834784\x000\xaa#\x86p-z\xb0\xd73p\xf5\xb5\x9f\xe3&\x06\xf2\x03\xbd\xd0\xb1JC1\x14\xc6\xf1\xf4R\xfa6\xf1\x9e\xa6\'\xc9=O\x13Nr\x12+\xb4\x08\xb7\xb7\x82\xab\x9b\x82\xe0$]\x9c\xc4W\xb0\x93\xe0(.\x82\x83\x83\xee\xba\x88\xce\xa5\x8bE(\xfa\x02:\x7f\xf0\xf1\xe3\xbf\xa8\xea\xbd\x94cnw\x83p\xc7\xb5\xe4\xc2\xf3IWwy\xd6\x85\xed\x92&\xf3Y\x97\xdb`\xc2X\xca,8\x0b\x8d\xcb\x92\x82o"\x04\x04#\x81\x89\\\xb0\x82\xc9[\x97\xa9x\xaa\xa7\xb9\xe3\xef\xc7\xa1\x97H\xa9D\xed\xc9\xa1\xc6\x02\xac#[\xd6\r\xba\x04\xc5\'0\xd6\xea)\xec\xf0A\xbb\xff>Vj0xZ]\xac\xdf\x96\x9f\x8f/\xbdJ\xa9\xbb\x9eR\x95Z\xfc9\x93\x86\xd1\x93\xc5\xa4\xc5a\xd4H\x1bT,&k\xeb\xd2\x88\x0b\x15d0[\xe6\xc7\x86\xd9\xef\xdf\x9e\xf7;\xea\x01\x0fz\xc8\xc3\x1e\xf1\xa8\xc7<\xee\x98\'<\xe9\xb8\xa7\x9c\xf0\xb4g<\xeb9\xcf;\xe9\x05/z\xc9\xcb^\xf1\xaaSN;\xe35\xaf{\xc3\x9b\xde\xf2\xb6w\x9c\xf5\xae\xf7\xbc\xef\x03\x1f\xfa\xc89\x1f\xfb\xc4\xa7\xce\xfb\xcc\x05\x17}\xee\x0b\x97|\xe9+_\xfb\xc6\xb7\xbe\xf3\xbd\xcb~\xf0\xa3\x9f\xfc\xec\x8a\xab\xae\xf9\xc5\xaf~\xf3\xbb?\xfc\xe9/\xd7\xfd\xed\x1f7\xdc\xf4\xaf\xff\x00\x02\xc2\xe7q 
\x03\x00\x00\x15\x00\x15\xa0\t\x15\xea\x02\x15\x8d\xbc\xbf\xb8\t\x1c\x15\xc8\x01\x15\x00\x15\x08\x15\x08\x00\x00\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00\x1d\xd2\xb1\x81B1\x0cD\xc1\x0f\xd7\x10X\xb2$\xf7\xdf\x18\xe7I^\xb4\x93\xed\xeby\x9e\xef\xeb?\xeb&n\xf2f\xdf\xd4M\xdf\xcc\xcdy\xdf\xf1G\xbf\xba44uki\xeb(\xbb\xd8\xc5.v\xb1\x8b]\xecb\x17\xbb\xd8\xc5\x06\x1bl\xb0\xc1\x06\x1bl\xb0\xc1\x06\x1bl\xb2\xc9&\x9bl\xb2\xc9&\x9bl\xb2\xc9nv\xb3\x9b\xdd\xecf7\xbb\xd9\xcdnv\xb3\xc5\x16[l\xb1\xc5\x16[l\xb1\xc5\x16\xdbl\xb3\xcd6\xdbl\xb3\xcd6\xdbl\xb3\xc3\x0e;\xec\xb0\xc3\x0e;\xec\xb0\xc3\x0e{\xd8\xc3\x1e\xf6\xb0\x87=\xeca\x0f{\xd8s\xfe|\xe3\xf3\x03\xd4\xdb\x86\xadP\x02\x00\x00\x19\x11\x02\x19\x18\x08\x00\x00\x00\x00\x00\x00\x00\x00\x19\x18\x08c\x00\x00\x00\x00\x00\x00\x00\x15\x02\x19\x16\x00\x00\x19\x11\x02\x19\x18\x011\x19\x18\x0299\x15\x02\x19\x16\x00\x00\x19\x1c\x16\x08\x15\xaa\x03\x16\x00\x00\x00\x19\x1c\x16\xb2\x03\x15\x9e\x03\x16\x00\x00\x00\x15\x02\x19\x00&\xb2\x03\x1c\x15\x0c\x19%\x00\x08\x19\x18\x01b\x15\x04\x16\xc8\x01\x16\xd4\t\x16\x9e\x03&\xb2\x03<6\x00(\x0299\x18\x011\x00\x19\x1c\x15\x00\x15\x00\x15\x02\x00\x00\x16\xc8\x07\x15\x18\x16\x8e\x07\x15$\x00\x16\xc8\x16\x16\xc8\x01&\x08\x16\xc8\x06\x14\x00\x00\x19\x1c\x18\x0eiceberg.schema\x18\x90\x01{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":true,"type":"long"},{"id":2,"name":"b","required":true,"type":"string"}]}\x00\x18Jparquet-mr version 1.12.3 (build f8dced182c4c1fbdec6ccb3185537b5a01e6ed6b)\x19,\x1c\x00\x00\x1c\x00\x00\x00\xcf\x01\x00\x00PAR1', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-19-a87394bb-ea7b-4855-b31b-54bb0ab9edd1-00001.parquet', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:06 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:06 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-19-a87394bb-ea7b-4855-b31b-54bb0ab9edd1-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79%2Fdata%2F00000-19-a87394bb-ea7b-4855-b31b-54bb0ab9edd1-00001.parquet&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:06 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-19-a87394bb-ea7b-4855-b31b-54bb0ab9edd1-00001.parquet', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:06 [ 670 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 
'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-19-a87394bb-ea7b-4855-b31b-54bb0ab9edd1-00001.parquet', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:309, write_file) 2025-04-04 18:15:06 [ 670 ] INFO : GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:06 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:06 [ 670 ] DEBUG : http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 404 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:06 [ 670 ] INFO : MKDIRS /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:06 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:06 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata?user.name=root&op=MKDIRS HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:06 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro user:root, principal:None (hdfs_api.py:256, write_file) 2025-04-04 18:15:06 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:06 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:06 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:06 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:06 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 
'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:284, write_file) 2025-04-04 18:15:06 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'Obj\x01\x0e\x16avro.schema\x9e\x1e{"type":"record","name":"manifest_file","fields":[{"name":"manifest_path","type":"string","doc":"Location URI with FS scheme","field-id":500},{"name":"manifest_length","type":"long","doc":"Total file size in bytes","field-id":501},{"name":"partition_spec_id","type":"int","doc":"Spec ID used to write","field-id":502},{"name":"content","type":"int","doc":"Contents of the manifest: 0=data, 1=deletes","field-id":517},{"name":"sequence_number","type":"long","doc":"Sequence number when the manifest was added","field-id":515},{"name":"min_sequence_number","type":"long","doc":"Lowest sequence number in the manifest","field-id":516},{"name":"added_snapshot_id","type":"long","doc":"Snapshot ID that added the manifest","field-id":503},{"name":"added_data_files_count","type":"int","doc":"Added entry count","field-id":504},{"name":"existing_data_files_count","type":"int","doc":"Existing entry count","field-id":505},{"name":"deleted_data_files_count","type":"int","doc":"Deleted entry count","field-id":506},{"name":"added_rows_count","type":"long","doc":"Added rows count","field-id":512},{"name":"existing_rows_count","type":"long","doc":"Existing rows count","field-id":513},{"name":"deleted_rows_count","type":"long","doc":"Deleted rows count","field-id":514},{"name":"partitions","type":["null",{"type":"array","items":{"type":"record","name":"r508","fields":[{"name":"contains_null","type":"boolean","doc":"True if any file has a null partition value","field-id":509},{"name":"contains_nan","type":["null","boolean"],"doc":"True if any file has a nan partition value","default":null,"field-id":518},{"name":"lower_bound","type":["null","bytes"],"doc":"Partition lower bound for all files","default":null,"field-id":510},{"name":"upper_bound","type":["null","bytes"],"doc":"Partition upper bound for all files","default":null,"field-id":511}]},"element-id":508}],"doc":"Summary for each partition","default":null,"field-id":507}]}\x14avro.codec\x0edeflate\x16snapshot-id&5114138684830281544\x1cformat-version\x022\x1esequence-number\x021\x1ciceberg.schema\xd6\x1f{"type":"struct","schema-id":0,"fields":[{"id":500,"name":"manifest_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":501,"name":"manifest_length","required":true,"type":"long","doc":"Total file size in bytes"},{"id":502,"name":"partition_spec_id","required":true,"type":"int","doc":"Spec ID used to write"},{"id":517,"name":"content","required":true,"type":"int","doc":"Contents of the manifest: 0=data, 1=deletes"},{"id":515,"name":"sequence_number","required":true,"type":"long","doc":"Sequence number when the manifest was added"},{"id":516,"name":"min_sequence_number","required":true,"type":"long","doc":"Lowest sequence number in the 
manifest"},{"id":503,"name":"added_snapshot_id","required":true,"type":"long","doc":"Snapshot ID that added the manifest"},{"id":504,"name":"added_data_files_count","required":true,"type":"int","doc":"Added entry count"},{"id":505,"name":"existing_data_files_count","required":true,"type":"int","doc":"Existing entry count"},{"id":506,"name":"deleted_data_files_count","required":true,"type":"int","doc":"Deleted entry count"},{"id":512,"name":"added_rows_count","required":true,"type":"long","doc":"Added rows count"},{"id":513,"name":"existing_rows_count","required":true,"type":"long","doc":"Existing rows count"},{"id":514,"name":"deleted_rows_count","required":true,"type":"long","doc":"Deleted rows count"},{"id":507,"name":"partitions","required":false,"type":{"type":"list","element-id":508,"element":{"type":"struct","fields":[{"id":509,"name":"contains_null","required":true,"type":"boolean","doc":"True if any file has a null partition value"},{"id":518,"name":"contains_nan","required":false,"type":"boolean","doc":"True if any file has a nan partition value"},{"id":510,"name":"lower_bound","required":false,"type":"binary","doc":"Partition lower bound for all files"},{"id":511,"name":"upper_bound","required":false,"type":"binary","doc":"Partition upper bound for all files"}]},"element-required":true},"doc":"Summary for each partition"}]}$parent-snapshot-id\x08null\x00F\xef\x15\xa6\xba\xc3\x1c\'\xf3:W:\xf1n\x85\x96\x02\xa8\x025\xccA\n\xc20\x10@\xd14\xf7\x89\x19b\x9a4\xa7\x19\xa6\x99\xa9\x15Z\x846\xf5\x00\xee\\\x08\xae\xc4\xbd\x87p\xe5\xde\xc3x\x01w\x8a\xe0\xf6?\xf8Wm\xb7YZ\x996\xc8T\xc8\xb2t\xb4\x0c\xc5\x16\x99\x0b\xfe%\x0f\xcb\\dB\x87=w3\x86\x1a\x9a \x9c16-\xa0\x07\xc7H)\x05\xac\xd9\xe7X\x07I]Lv\x94B\xbf#q\xa0\x04\x90\x8coC4\x9e8\x9a\x04\x01\x0c;\xdf0\x05q\xeb$f\x84\x15\xed\xa7\xdd\xabWJ\xeb\xf3\xe3v\xb9\x1f\x8e\xefS\xa5\x95zV\xdf\xa4>F\xef\x15\xa6\xba\xc3\x1c\'\xf3:W:\xf1n\x85\x96', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:06 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:06 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79%2Fmetadata%2Fsnap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:06 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 
18:15:06 [ 670 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:309, write_file) 2025-04-04 18:15:06 [ 670 ] INFO : GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:06 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:06 [ 670 ] DEBUG : http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:06 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/ad6a9009-4b67-4ad7-9060-d248da6e239e-m0.avro user:root, principal:None (hdfs_api.py:256, write_file) 2025-04-04 18:15:06 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/ad6a9009-4b67-4ad7-9060-d248da6e239e-m0.avro?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:06 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:06 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/ad6a9009-4b67-4ad7-9060-d248da6e239e-m0.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:06 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/ad6a9009-4b67-4ad7-9060-d248da6e239e-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:06 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/ad6a9009-4b67-4ad7-9060-d248da6e239e-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:284, write_file) 2025-04-04 18:15:06 [ 670 ] DEBUG : CALL: 
{'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/ad6a9009-4b67-4ad7-9060-d248da6e239e-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'Obj\x01\x10\x0cschema\xa4\x02{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":false,"type":"long"},{"id":2,"name":"b","required":false,"type":"string"}]}\x16avro.schema\x925{"type":"record","name":"manifest_entry","fields":[{"name":"status","type":"int","field-id":0},{"name":"snapshot_id","type":["null","long"],"default":null,"field-id":1},{"name":"sequence_number","type":["null","long"],"default":null,"field-id":3},{"name":"file_sequence_number","type":["null","long"],"default":null,"field-id":4},{"name":"data_file","type":{"type":"record","name":"r2","fields":[{"name":"content","type":"int","doc":"Contents of the file: 0=data, 1=position deletes, 2=equality deletes","field-id":134},{"name":"file_path","type":"string","doc":"Location URI with FS scheme","field-id":100},{"name":"file_format","type":"string","doc":"File format name: avro, orc, or parquet","field-id":101},{"name":"partition","type":{"type":"record","name":"r102","fields":[]},"doc":"Partition data tuple, schema based on the partition spec","field-id":102},{"name":"record_count","type":"long","doc":"Number of records in the file","field-id":103},{"name":"file_size_in_bytes","type":"long","doc":"Total file size in bytes","field-id":104},{"name":"column_sizes","type":["null",{"type":"array","items":{"type":"record","name":"k117_v118","fields":[{"name":"key","type":"int","field-id":117},{"name":"value","type":"long","field-id":118}]},"logicalType":"map"}],"doc":"Map of column id to total size on disk","default":null,"field-id":108},{"name":"value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k119_v120","fields":[{"name":"key","type":"int","field-id":119},{"name":"value","type":"long","field-id":120}]},"logicalType":"map"}],"doc":"Map of column id to total count, including null and NaN","default":null,"field-id":109},{"name":"null_value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k121_v122","fields":[{"name":"key","type":"int","field-id":121},{"name":"value","type":"long","field-id":122}]},"logicalType":"map"}],"doc":"Map of column id to null value count","default":null,"field-id":110},{"name":"nan_value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k138_v139","fields":[{"name":"key","type":"int","field-id":138},{"name":"value","type":"long","field-id":139}]},"logicalType":"map"}],"doc":"Map of column id to number of NaN values in the column","default":null,"field-id":137},{"name":"lower_bounds","type":["null",{"type":"array","items":{"type":"record","name":"k126_v127","fields":[{"name":"key","type":"int","field-id":126},{"name":"value","type":"bytes","field-id":127}]},"logicalType":"map"}],"doc":"Map of column id to lower bound","default":null,"field-id":125},{"name":"upper_bounds","type":["null",{"type":"array","items":{"type":"record","name":"k129_v130","fields":[{"name":"key","type":"int","field-id":129},{"name":"value","type":"bytes","field-id":130}]},"logicalType":"map"}],"doc":"Map of column id to upper bound","default":null,"field-id":128},{"name":"key_metadata","type":["null","bytes"],"doc":"Encryption key metadata blob","default":null,"field-id":131},{"name":"split_offsets","type":["null",{"type":"array","items":"long","element-id":133}],"doc":"Splittable 
offsets","default":null,"field-id":132},{"name":"equality_ids","type":["null",{"type":"array","items":"int","element-id":136}],"doc":"Equality comparison field IDs","default":null,"field-id":135},{"name":"sort_order_id","type":["null","int"],"doc":"Sort order ID","default":null,"field-id":140}]},"field-id":2}]}\x14avro.codec\x0edeflate\x1cformat-version\x022"partition-spec-id\x020\x1ciceberg.schema\xca+{"type":"struct","schema-id":0,"fields":[{"id":0,"name":"status","required":true,"type":"int"},{"id":1,"name":"snapshot_id","required":false,"type":"long"},{"id":3,"name":"sequence_number","required":false,"type":"long"},{"id":4,"name":"file_sequence_number","required":false,"type":"long"},{"id":2,"name":"data_file","required":true,"type":{"type":"struct","fields":[{"id":134,"name":"content","required":true,"type":"int","doc":"Contents of the file: 0=data, 1=position deletes, 2=equality deletes"},{"id":100,"name":"file_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":101,"name":"file_format","required":true,"type":"string","doc":"File format name: avro, orc, or parquet"},{"id":102,"name":"partition","required":true,"type":{"type":"struct","fields":[]},"doc":"Partition data tuple, schema based on the partition spec"},{"id":103,"name":"record_count","required":true,"type":"long","doc":"Number of records in the file"},{"id":104,"name":"file_size_in_bytes","required":true,"type":"long","doc":"Total file size in bytes"},{"id":108,"name":"column_sizes","required":false,"type":{"type":"map","key-id":117,"key":"int","value-id":118,"value":"long","value-required":true},"doc":"Map of column id to total size on disk"},{"id":109,"name":"value_counts","required":false,"type":{"type":"map","key-id":119,"key":"int","value-id":120,"value":"long","value-required":true},"doc":"Map of column id to total count, including null and NaN"},{"id":110,"name":"null_value_counts","required":false,"type":{"type":"map","key-id":121,"key":"int","value-id":122,"value":"long","value-required":true},"doc":"Map of column id to null value count"},{"id":137,"name":"nan_value_counts","required":false,"type":{"type":"map","key-id":138,"key":"int","value-id":139,"value":"long","value-required":true},"doc":"Map of column id to number of NaN values in the column"},{"id":125,"name":"lower_bounds","required":false,"type":{"type":"map","key-id":126,"key":"int","value-id":127,"value":"binary","value-required":true},"doc":"Map of column id to lower bound"},{"id":128,"name":"upper_bounds","required":false,"type":{"type":"map","key-id":129,"key":"int","value-id":130,"value":"binary","value-required":true},"doc":"Map of column id to upper bound"},{"id":131,"name":"key_metadata","required":false,"type":"binary","doc":"Encryption key metadata blob"},{"id":132,"name":"split_offsets","required":false,"type":{"type":"list","element-id":133,"element":"long","element-required":true},"doc":"Splittable offsets"},{"id":135,"name":"equality_ids","required":false,"type":{"type":"list","element-id":136,"element":"int","element-required":true},"doc":"Equality comparison field IDs"},{"id":140,"name":"sort_order_id","required":false,"type":"int","doc":"Sort order 
ID"}]}}]}\x1cpartition-spec\x04[]\x0econtent\x08data\x00K\xc8?\xd9\xdd\x91\xffC\xbe\x987[\xcd\xc4\xf4X\x02\x9c\x035\x8c\xbbR\xc30\x10EeEE\xaa\xc0\x8f\x08K\x89eyK\nz\xc8@\xbd\xa3\xc7\x1a\x98I\x01\xb6\xfc\x03t\x140T\x944\xe9R\xa4ME\xef_\xa2\x8b\x95!\xb7:\xf7\xee\x99\xe5\xfc\xebw\xfb}x{\xff\xfb(\x18c{^>\x07\xf2\xd4=bt\xc9\x95\x91Z7lR\x99\xa8Ox\xbe\x84\xcd\xd0\'\xeap\x89O\xb1\xed\xb16\xaa\xa9)\x06\xb4\x8dWX\xa9eD\x07P\xa3\x89U\xb0\xa6&h-\x94\xa7o*Gj\x90\xae\xb1+\xa8\xbc\x97\xe4\xac\x97Uc\x8c\xf4+\xed\xa5\x996\xe5\xf7;\xea\x01\x0fz\xc8\xc3\x1e\xf1\xa8\xc7<\xee\x98\'<\xe9\xb8\xa7\x9c\xf0\xb4g<\xeb9\xcf;\xe9\x05/z\xc9\xcb^\xf1\xaaSN;\xe35\xaf{\xc3\x9b\xde\xf2\xb6w\x9c\xf5\xae\xf7\xbc\xef\x03\x1f\xfa\xc89\x1f\xfb\xc4\xa7\xce\xfb\xcc\x05\x17}\xee\x0b\x97|\xe9+_\xfb\xc6\xb7\xbe\xf3\xbd\xcb~\xf0\xa3\x9f\xfc\xec\x8a\xab\xae\xf9\xc5\xaf~\xf3\xbb?\xfc\xe9/\xd7\xfd\xed\x1f7\xdc\xf4\xaf\xff\x00\x02\xc2\xe7q \x03\x00\x00\x15\x00\x15\xa0\t\x15\xea\x02\x15\x8d\xbc\xbf\xb8\t\x1c\x15\xc8\x01\x15\x00\x15\x08\x15\x08\x00\x00\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00\x1d\xd2\xb1\x81B1\x0cD\xc1\x0f\xd7\x10X\xb2$\xf7\xdf\x18\xe7I^\xb4\x93\xed\xeby\x9e\xef\xeb?\xeb&n\xf2f\xdf\xd4M\xdf\xcc\xcdy\xdf\xf1G\xbf\xba44uki\xeb(\xbb\xd8\xc5.v\xb1\x8b]\xecb\x17\xbb\xd8\xc5\x06\x1bl\xb0\xc1\x06\x1bl\xb0\xc1\x06\x1bl\xb2\xc9&\x9bl\xb2\xc9&\x9bl\xb2\xc9nv\xb3\x9b\xdd\xecf7\xbb\xd9\xcdnv\xb3\xc5\x16[l\xb1\xc5\x16[l\xb1\xc5\x16\xdbl\xb3\xcd6\xdbl\xb3\xcd6\xdbl\xb3\xc3\x0e;\xec\xb0\xc3\x0e;\xec\xb0\xc3\x0e{\xd8\xc3\x1e\xf6\xb0\x87=\xeca\x0f{\xd8s\xfe|\xe3\xf3\x03\xd4\xdb\x86\xadP\x02\x00\x00\x19\x11\x02\x19\x18\x08\x00\x00\x00\x00\x00\x00\x00\x00\x19\x18\x08c\x00\x00\x00\x00\x00\x00\x00\x15\x02\x19\x16\x00\x00\x19\x11\x02\x19\x18\x011\x19\x18\x0299\x15\x02\x19\x16\x00\x00\x19\x1c\x16\x08\x15\xaa\x03\x16\x00\x00\x00\x19\x1c\x16\xb2\x03\x15\x9e\x03\x16\x00\x00\x00\x15\x02\x19\x00&\xb2\x03\x1c\x15\x0c\x19%\x00\x08\x19\x18\x01b\x15\x04\x16\xc8\x01\x16\xd4\t\x16\x9e\x03&\xb2\x03<6\x00(\x0299\x18\x011\x00\x19\x1c\x15\x00\x15\x00\x15\x02\x00\x00\x16\xc8\x07\x15\x18\x16\x8e\x07\x15$\x00\x16\xc8\x16\x16\xc8\x01&\x08\x16\xc8\x06\x14\x00\x00\x19\x1c\x18\x0eiceberg.schema\x18\x90\x01{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":true,"type":"long"},{"id":2,"name":"b","required":true,"type":"string"}]}\x00\x18Jparquet-mr version 1.12.3 (build f8dced182c4c1fbdec6ccb3185537b5a01e6ed6b)\x19,\x1c\x00\x00\x1c\x00\x00\x00\xcf\x01\x00\x00PAR1', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-19-a87394bb-ea7b-4855-b31b-54bb0ab9edd1-00001.parquet', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:06 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:06 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-19-a87394bb-ea7b-4855-b31b-54bb0ab9edd1-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79%2Fdata%2F00000-19-a87394bb-ea7b-4855-b31b-54bb0ab9edd1-00001.parquet&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:06 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 
2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-19-a87394bb-ea7b-4855-b31b-54bb0ab9edd1-00001.parquet', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:06 [ 670 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-19-a87394bb-ea7b-4855-b31b-54bb0ab9edd1-00001.parquet', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:309, write_file) 2025-04-04 18:15:06 [ 670 ] INFO : GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:06 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:06 [ 670 ] DEBUG : http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:06 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-21-3cfab2b8-56ba-416d-9c8a-f9630cdd1783-00001.parquet user:root, principal:None (hdfs_api.py:256, write_file) 2025-04-04 18:15:06 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-21-3cfab2b8-56ba-416d-9c8a-f9630cdd1783-00001.parquet?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:06 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:06 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-21-3cfab2b8-56ba-416d-9c8a-f9630cdd1783-00001.parquet?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:06 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-21-3cfab2b8-56ba-416d-9c8a-f9630cdd1783-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:06 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Date': 
'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-21-3cfab2b8-56ba-416d-9c8a-f9630cdd1783-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:284, write_file) 2025-04-04 18:15:06 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-21-3cfab2b8-56ba-416d-9c8a-f9630cdd1783-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'PAR1\x15\x00\x15\xc0\x0c\x15\xf6\x02\x15\xbd\xf3\xd4\x95\x06\x1c\x15\xc8\x01\x15\x00\x15\x08\x15\x08\x00\x00\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00-\xc5\xd7"\x02\x00\x00\x00\xc0\x90Y![$+\xb3\xb23C\x8a\xec\x8c\xca\x8e\xf2\xff\xff\xe0\xc1\xdd\xcb\x05\x02\xffZ\xdc\xea6\x07\xdd\xee\x0ew\xba\xcb\xdd\xeeq\xc8aG\xdc\xeb>\xf7;\xea\x01\x0fz\xc8\xc3\x1e\xf1\xa8\xc7<\xee\x98\'<\xe9\xb8\xa7\x9c\xf0\xb4g<\xeb9\xcf;\xe9\x05/z\xc9\xcb^\xf1\xaaSN;\xe35\xaf{\xc3\x9b\xde\xf2\xb6w\x9c\xf5\xae\xf7\xbc\xef\x03\x1f\xfa\xc89\x1f\xfb\xc4\xa7\xce\xfb\xcc\x05\x17}\xee\x0b\x97|\xe9+_\xfb\xc6\xb7\xbe\xf3\xbd\xcb~\xf0\xa3\x9f\xfc\xec\x8a\xab\xae\xf9\xc5\xaf~\xf3\xbb?\xfc\xe9/\xd7\xfd\xed\x1f7\xdc\xf4\xaf\xff\x00\x02\xc2\xe7q \x03\x00\x00\x15\x00\x15\xa0\t\x15\xea\x02\x15\x8d\xbc\xbf\xb8\t\x1c\x15\xc8\x01\x15\x00\x15\x08\x15\x08\x00\x00\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00\x1d\xd2\xb1\x81B1\x0cD\xc1\x0f\xd7\x10X\xb2$\xf7\xdf\x18\xe7I^\xb4\x93\xed\xeby\x9e\xef\xeb?\xeb&n\xf2f\xdf\xd4M\xdf\xcc\xcdy\xdf\xf1G\xbf\xba44uki\xeb(\xbb\xd8\xc5.v\xb1\x8b]\xecb\x17\xbb\xd8\xc5\x06\x1bl\xb0\xc1\x06\x1bl\xb0\xc1\x06\x1bl\xb2\xc9&\x9bl\xb2\xc9&\x9bl\xb2\xc9nv\xb3\x9b\xdd\xecf7\xbb\xd9\xcdnv\xb3\xc5\x16[l\xb1\xc5\x16[l\xb1\xc5\x16\xdbl\xb3\xcd6\xdbl\xb3\xcd6\xdbl\xb3\xc3\x0e;\xec\xb0\xc3\x0e;\xec\xb0\xc3\x0e{\xd8\xc3\x1e\xf6\xb0\x87=\xeca\x0f{\xd8s\xfe|\xe3\xf3\x03\xd4\xdb\x86\xadP\x02\x00\x00\x19\x11\x02\x19\x18\x08\x00\x00\x00\x00\x00\x00\x00\x00\x19\x18\x08c\x00\x00\x00\x00\x00\x00\x00\x15\x02\x19\x16\x00\x00\x19\x11\x02\x19\x18\x011\x19\x18\x0299\x15\x02\x19\x16\x00\x00\x19\x1c\x16\x08\x15\xaa\x03\x16\x00\x00\x00\x19\x1c\x16\xb2\x03\x15\x9e\x03\x16\x00\x00\x00\x15\x02\x19\x00&\xb2\x03\x1c\x15\x0c\x19%\x00\x08\x19\x18\x01b\x15\x04\x16\xc8\x01\x16\xd4\t\x16\x9e\x03&\xb2\x03<6\x00(\x0299\x18\x011\x00\x19\x1c\x15\x00\x15\x00\x15\x02\x00\x00\x16\xc8\x07\x15\x18\x16\x8e\x07\x15$\x00\x16\xc8\x16\x16\xc8\x01&\x08\x16\xc8\x06\x14\x00\x00\x19\x1c\x18\x0eiceberg.schema\x18\x90\x01{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":true,"type":"long"},{"id":2,"name":"b","required":true,"type":"string"}]}\x00\x18Jparquet-mr version 1.12.3 (build f8dced182c4c1fbdec6ccb3185537b5a01e6ed6b)\x19,\x1c\x00\x00\x1c\x00\x00\x00\xcf\x01\x00\x00PAR1', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-21-3cfab2b8-56ba-416d-9c8a-f9630cdd1783-00001.parquet', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:06 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:06 [ 670 ] DEBUG : Command to send: m d 
o331 e (clientserver.py:501, send_command)
2025-04-04 18:15:06 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:06 [ 670 ] DEBUG : Command to send: m d o330 e (clientserver.py:501, send_command)
2025-04-04 18:15:06 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:06 [ 670 ] DEBUG : Command to send: m d o307 e (clientserver.py:501, send_command)
2025-04-04 18:15:06 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:06 [ 670 ] DEBUG : Command to send: m d o352 e (clientserver.py:501, send_command)
2025-04-04 18:15:06 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:06 [ 670 ] DEBUG : Command to send: m d o359 e (clientserver.py:501, send_command)
2025-04-04 18:15:06 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:06 [ 670 ] DEBUG : Command to send: m d o361 e (clientserver.py:501, send_command)
2025-04-04 18:15:06 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:06 [ 670 ] DEBUG : Command to send: m d o360 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o347 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o397 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o404 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o407 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o412 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o417 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o425 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o431 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o438 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o443 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o410 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o411 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o413 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o414 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o415 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o416 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o418 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o419 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o420 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o421 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o422 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o424 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o426 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o427 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o428 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o429 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o430 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o432 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o433 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o434 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o435 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o436 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o437 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o439 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o440 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o441 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o442 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o444 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o445 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o448 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o449 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o452 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o457 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o465 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o471 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o478 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o450 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o451 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o453 e (clientserver.py:501, send_command)
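[Editor's note] The "Command to send: m d oNNN e" / "Answer received: !yv" pairs in this stretch are py4j client-server traffic from the Spark session the harness drives to produce the Iceberg data (clientserver.py here is py4j's, not ClickHouse's). Reading them against py4j's protocol constants, "m d <object id> e" appears to be a memory-delete command releasing the JVM-side reference behind a Python JavaObject proxy, and "!yv" a success answer with a void result; in other words, the Python garbage collector is releasing JVM handles once the write completes. That interpretation is mine, not stated in the log. A small sketch that tallies these exchanges out of a saved copy of the log (regex and filename are illustrative):

    import re
    from collections import Counter

    # Record shapes as they appear in this log:
    #   "Command to send: m d o453 e"   -> memory-delete for JVM object o453
    #   "Answer received: !yv"          -> success answer, void result
    CMD = re.compile(r"Command to send: m d (o\d+) e")
    ANS = re.compile(r"Answer received: (!\w+)")

    def tally(log_text):
        """Count delete commands per JVM object id and answer kinds."""
        return Counter(CMD.findall(log_text)), Counter(ANS.findall(log_text))

    if __name__ == "__main__":
        with open("integration_run.log") as f:  # hypothetical local copy
            cmds, answers = tally(f.read())
        print(len(cmds), "JVM handles released;", dict(answers))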
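[Editor's note] The WebHDFS records interleaved with this traffic all follow the same two-step CREATE handshake: a PUT to the namenode (port 50070) with redirects disabled, a 307 whose Location header names a datanode (port 50075), then a second PUT that uploads the bytes to that Location and is acknowledged with 201; when a GETFILESTATUS on the target directory returns 404, an MKDIRS is issued first, as seen above. A minimal sketch of the handshake with requests, using the addresses shown in these records (the helper name is mine; real code would add error handling and retries):

    import requests

    def webhdfs_create(namenode, path, data, user="root"):
        """Two-step WebHDFS CREATE: the namenode answers with a 307
        redirect to a datanode, which accepts the actual payload."""
        url = f"http://{namenode}/webhdfs/v1{path}"
        # Step 1: ask the namenode where to write; do not follow the redirect.
        r1 = requests.put(
            url,
            params={"op": "CREATE", "overwrite": "true", "user.name": user},
            allow_redirects=False,
        )
        assert r1.status_code == 307, r1.text
        # Step 2: PUT the bytes to the datanode named in the Location header.
        r2 = requests.put(r1.headers["Location"], data=data)
        assert r2.status_code == 201, r2.text

    # e.g. the version hint written earlier in this section is literally one byte:
    # webhdfs_create("172.16.2.2:50070", ".../metadata/version-hint.text", b"3")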
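[Editor's note] Taken together, the uploads in this section are Iceberg commits landing on HDFS in HadoopCatalog layout: a Parquet file under data/, a manifest (*-m0.avro) and a manifest list (snap-<snapshot-id>-1-<uuid>.avro) under metadata/, and finally version-hint.text rewritten with the new table version (the payload logged above is just b'3'), so readers can find the current vN.metadata.json. The manifest-list schema is embedded in the Avro payloads logged above; a sketch of inspecting such a file after copying it locally, using the third-party fastavro package (an assumption, not something the harness shown here uses):

    from fastavro import reader  # third-party; not part of the test harness

    # Inspect a manifest list such as
    # snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro;
    # field names are taken from the avro.schema embedded in the log above.
    with open("manifest_list.avro", "rb") as fo:  # hypothetical local copy
        for entry in reader(fo):
            print(entry["manifest_path"],
                  entry["sequence_number"],
                  entry["added_data_files_count"],
                  entry["added_rows_count"])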
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o454 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o455 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o456 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o458 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o459 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o460 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o461 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o462 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o464 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o466 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o467 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o468 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o469 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o470 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o472 e (clientserver.py:501, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-21-3cfab2b8-56ba-416d-9c8a-f9630cdd1783-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79%2Fdata%2F00000-21-3cfab2b8-56ba-416d-9c8a-f9630cdd1783-00001.parquet&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request)
2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:07 [ 670 ] DEBUG :
response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-21-3cfab2b8-56ba-416d-9c8a-f9630cdd1783-00001.parquet', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o473 e (clientserver.py:501, send_command) 2025-04-04 18:15:07 [ 670 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:06 GMT, Fri, 04 Apr 2025 18:15:06 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-21-3cfab2b8-56ba-416d-9c8a-f9630cdd1783-00001.parquet', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:309, write_file) 2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o474 e (clientserver.py:501, send_command) 2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o476 e (clientserver.py:501, send_command) 2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o477 e (clientserver.py:501, send_command) 2025-04-04 18:15:07 [ 670 ] INFO : GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:07 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o479 e (clientserver.py:501, send_command) 2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o480 e (clientserver.py:501, send_command) 2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o481 e (clientserver.py:501, send_command) 2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:07 [ 670 ] DEBUG : Command to send: m d o483 e (clientserver.py:501, send_command) 2025-04-04 18:15:07 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:07 [ 670 ] DEBUG : http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:07 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/91b7954c-d64b-4955-bf2e-56c3af9f4a02-m0.avro user:root, principal:None (hdfs_api.py:256, write_file) 2025-04-04 18:15:07 [ 670 ] 
DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/91b7954c-d64b-4955-bf2e-56c3af9f4a02-m0.avro?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:07 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/91b7954c-d64b-4955-bf2e-56c3af9f4a02-m0.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:07 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/91b7954c-d64b-4955-bf2e-56c3af9f4a02-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/91b7954c-d64b-4955-bf2e-56c3af9f4a02-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:284, write_file) 2025-04-04 18:15:07 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/91b7954c-d64b-4955-bf2e-56c3af9f4a02-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'Obj\x01\x10\x0cschema\xa4\x02{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":false,"type":"long"},{"id":2,"name":"b","required":false,"type":"string"}]}\x16avro.schema\x925{"type":"record","name":"manifest_entry","fields":[{"name":"status","type":"int","field-id":0},{"name":"snapshot_id","type":["null","long"],"default":null,"field-id":1},{"name":"sequence_number","type":["null","long"],"default":null,"field-id":3},{"name":"file_sequence_number","type":["null","long"],"default":null,"field-id":4},{"name":"data_file","type":{"type":"record","name":"r2","fields":[{"name":"content","type":"int","doc":"Contents of the file: 0=data, 1=position deletes, 2=equality deletes","field-id":134},{"name":"file_path","type":"string","doc":"Location URI with FS scheme","field-id":100},{"name":"file_format","type":"string","doc":"File format name: avro, orc, or parquet","field-id":101},{"name":"partition","type":{"type":"record","name":"r102","fields":[]},"doc":"Partition data tuple, schema based on the partition spec","field-id":102},{"name":"record_count","type":"long","doc":"Number of records in the file","field-id":103},{"name":"file_size_in_bytes","type":"long","doc":"Total file size 
in bytes","field-id":104},{"name":"column_sizes","type":["null",{"type":"array","items":{"type":"record","name":"k117_v118","fields":[{"name":"key","type":"int","field-id":117},{"name":"value","type":"long","field-id":118}]},"logicalType":"map"}],"doc":"Map of column id to total size on disk","default":null,"field-id":108},{"name":"value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k119_v120","fields":[{"name":"key","type":"int","field-id":119},{"name":"value","type":"long","field-id":120}]},"logicalType":"map"}],"doc":"Map of column id to total count, including null and NaN","default":null,"field-id":109},{"name":"null_value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k121_v122","fields":[{"name":"key","type":"int","field-id":121},{"name":"value","type":"long","field-id":122}]},"logicalType":"map"}],"doc":"Map of column id to null value count","default":null,"field-id":110},{"name":"nan_value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k138_v139","fields":[{"name":"key","type":"int","field-id":138},{"name":"value","type":"long","field-id":139}]},"logicalType":"map"}],"doc":"Map of column id to number of NaN values in the column","default":null,"field-id":137},{"name":"lower_bounds","type":["null",{"type":"array","items":{"type":"record","name":"k126_v127","fields":[{"name":"key","type":"int","field-id":126},{"name":"value","type":"bytes","field-id":127}]},"logicalType":"map"}],"doc":"Map of column id to lower bound","default":null,"field-id":125},{"name":"upper_bounds","type":["null",{"type":"array","items":{"type":"record","name":"k129_v130","fields":[{"name":"key","type":"int","field-id":129},{"name":"value","type":"bytes","field-id":130}]},"logicalType":"map"}],"doc":"Map of column id to upper bound","default":null,"field-id":128},{"name":"key_metadata","type":["null","bytes"],"doc":"Encryption key metadata blob","default":null,"field-id":131},{"name":"split_offsets","type":["null",{"type":"array","items":"long","element-id":133}],"doc":"Splittable offsets","default":null,"field-id":132},{"name":"equality_ids","type":["null",{"type":"array","items":"int","element-id":136}],"doc":"Equality comparison field IDs","default":null,"field-id":135},{"name":"sort_order_id","type":["null","int"],"doc":"Sort order ID","default":null,"field-id":140}]},"field-id":2}]}\x14avro.codec\x0edeflate\x1cformat-version\x022"partition-spec-id\x020\x1ciceberg.schema\xca+{"type":"struct","schema-id":0,"fields":[{"id":0,"name":"status","required":true,"type":"int"},{"id":1,"name":"snapshot_id","required":false,"type":"long"},{"id":3,"name":"sequence_number","required":false,"type":"long"},{"id":4,"name":"file_sequence_number","required":false,"type":"long"},{"id":2,"name":"data_file","required":true,"type":{"type":"struct","fields":[{"id":134,"name":"content","required":true,"type":"int","doc":"Contents of the file: 0=data, 1=position deletes, 2=equality deletes"},{"id":100,"name":"file_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":101,"name":"file_format","required":true,"type":"string","doc":"File format name: avro, orc, or parquet"},{"id":102,"name":"partition","required":true,"type":{"type":"struct","fields":[]},"doc":"Partition data tuple, schema based on the partition spec"},{"id":103,"name":"record_count","required":true,"type":"long","doc":"Number of records in the file"},{"id":104,"name":"file_size_in_bytes","required":true,"type":"long","doc":"Total file size in 
bytes"},{"id":108,"name":"column_sizes","required":false,"type":{"type":"map","key-id":117,"key":"int","value-id":118,"value":"long","value-required":true},"doc":"Map of column id to total size on disk"},{"id":109,"name":"value_counts","required":false,"type":{"type":"map","key-id":119,"key":"int","value-id":120,"value":"long","value-required":true},"doc":"Map of column id to total count, including null and NaN"},{"id":110,"name":"null_value_counts","required":false,"type":{"type":"map","key-id":121,"key":"int","value-id":122,"value":"long","value-required":true},"doc":"Map of column id to null value count"},{"id":137,"name":"nan_value_counts","required":false,"type":{"type":"map","key-id":138,"key":"int","value-id":139,"value":"long","value-required":true},"doc":"Map of column id to number of NaN values in the column"},{"id":125,"name":"lower_bounds","required":false,"type":{"type":"map","key-id":126,"key":"int","value-id":127,"value":"binary","value-required":true},"doc":"Map of column id to lower bound"},{"id":128,"name":"upper_bounds","required":false,"type":{"type":"map","key-id":129,"key":"int","value-id":130,"value":"binary","value-required":true},"doc":"Map of column id to upper bound"},{"id":131,"name":"key_metadata","required":false,"type":"binary","doc":"Encryption key metadata blob"},{"id":132,"name":"split_offsets","required":false,"type":{"type":"list","element-id":133,"element":"long","element-required":true},"doc":"Splittable offsets"},{"id":135,"name":"equality_ids","required":false,"type":{"type":"list","element-id":136,"element":"int","element-required":true},"doc":"Equality comparison field IDs"},{"id":140,"name":"sort_order_id","required":false,"type":"int","doc":"Sort order ID"}]}}]}\x1cpartition-spec\x04[]\x0econtent\x08data\x00N\xa1\x91W#4\x9a$\xc4\x02\x0f\xa1N\xd6\xebD\x02\x9e\x035\x8c\xadR\x031\x14\x85\xb3i\x04\n\xfa"\xe9&\xfb\x93Md\x05\x1e\x18\xd0w\x92\xdc\xa4\xedLE\xbb\x9b\x15}\x85\n\x1e\x01\x87A \xd1\xc8\x1d\x0c\x9e\x07\xe09h:\xf4\xa8\xef\xcc\xf9\xe6P\xfay\xfc\xf9\xfe\xf8\xfdz=\x10B\xdei\xb9\xf1\xc1\x85~\x05h\x93-1D;nS\x99\xc2\x90\xe0\xb2\xf8\xed8\xa4\xd0C\x05k\x8c\x03\xa8Vh\x15\xd0C\xa7\x9d\x80FT\x08\xd6\x18\x05-6\xbekU0\xb13\xe5\xf9M\xe4\xf0J\xf2\xdaG\xeb*\xa7y\xab\x9c\xe5\x8dT\xc8\x8d\xd7\x96G\xa3j\xe1\x11e\xa7k\x9ee\xb9\xd8\xd9~?\x86t}\xb7|\xb8\x7f\xba}\x9c\x8a\xe7\x1b\xca\xe8\xdb\x8c\xbd\xcc\xc8\t\xa6\x82ME\x06\xc2\x08\xa1\x19\xe6\xe4?\x8c\xcas\xf7\x97\xce\x8c99\xf4*\x8b\x7fN\xa1\x91W#4\x9a$\xc4\x02\x0f\xa1N\xd6\xebD', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/91b7954c-d64b-4955-bf2e-56c3af9f4a02-m0.avro', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:07 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/91b7954c-d64b-4955-bf2e-56c3af9f4a02-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79%2Fmetadata%2F91b7954c-d64b-4955-bf2e-56c3af9f4a02-m0.avro&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:07 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 
'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/91b7954c-d64b-4955-bf2e-56c3af9f4a02-m0.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/91b7954c-d64b-4955-bf2e-56c3af9f4a02-m0.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:309, write_file) 2025-04-04 18:15:07 [ 670 ] INFO : GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:07 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:07 [ 670 ] DEBUG : http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:07 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/v2.metadata.json user:root, principal:None (hdfs_api.py:256, write_file) 2025-04-04 18:15:07 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/v2.metadata.json?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:07 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/v2.metadata.json?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:07 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/v2.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 
'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/v2.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:284, write_file) 2025-04-04 18:15:07 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/v2.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'{\n "format-version" : 2,\n "table-uuid" : "c91f0788-0ba8-4ca9-9925-2ab9a7a11eb7",\n "location" : "/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79",\n "last-sequence-number" : 2,\n "last-updated-ms" : 1743790506942,\n "last-column-id" : 2,\n "current-schema-id" : 0,\n "schemas" : [ {\n "type" : "struct",\n "schema-id" : 0,\n "fields" : [ {\n "id" : 1,\n "name" : "a",\n "required" : false,\n "type" : "long"\n }, {\n "id" : 2,\n "name" : "b",\n "required" : false,\n "type" : "string"\n } ]\n } ],\n "default-spec-id" : 0,\n "partition-specs" : [ {\n "spec-id" : 0,\n "fields" : [ ]\n } ],\n "last-partition-id" : 999,\n "default-sort-order-id" : 0,\n "sort-orders" : [ {\n "order-id" : 0,\n "fields" : [ ]\n } ],\n "properties" : {\n "owner" : "root"\n },\n "current-snapshot-id" : 4370063500831834784,\n "refs" : {\n "main" : {\n "snapshot-id" : 4370063500831834784,\n "type" : "branch"\n }\n },\n "snapshots" : [ {\n "sequence-number" : 1,\n "snapshot-id" : 5114138684830281544,\n "timestamp-ms" : 1743790506546,\n "summary" : {\n "operation" : "append",\n "spark.app.id" : "local-1743790492634",\n "added-data-files" : "1",\n "added-records" : "100",\n "added-files-size" : "967",\n "changed-partition-count" : "1",\n "total-records" : "100",\n "total-files-size" : "967",\n "total-data-files" : "1",\n "total-delete-files" : "0",\n "total-position-deletes" : "0",\n "total-equality-deletes" : "0"\n },\n "manifest-list" : "/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro",\n "schema-id" : 0\n }, {\n "sequence-number" : 2,\n "snapshot-id" : 4370063500831834784,\n "parent-snapshot-id" : 5114138684830281544,\n "timestamp-ms" : 1743790506942,\n "summary" : {\n "operation" : "append",\n "spark.app.id" : "local-1743790492634",\n "added-data-files" : "1",\n "added-records" : "100",\n "added-files-size" : "967",\n "changed-partition-count" : "1",\n "total-records" : "200",\n "total-files-size" : "1934",\n "total-data-files" : "2",\n "total-delete-files" : "0",\n "total-position-deletes" : "0",\n "total-equality-deletes" : "0"\n },\n "manifest-list" : "/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro",\n "schema-id" : 0\n } ],\n "statistics" : [ ],\n "snapshot-log" : [ {\n "timestamp-ms" : 1743790506546,\n "snapshot-id" : 5114138684830281544\n }, {\n "timestamp-ms" : 1743790506942,\n "snapshot-id" : 4370063500831834784\n } ],\n "metadata-log" : [ {\n "timestamp-ms" : 1743790506546,\n "metadata-file" : "/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/v1.metadata.json"\n } ]\n}', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': 
'/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/v2.metadata.json', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:07 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/v2.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79%2Fmetadata%2Fv2.metadata.json&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:07 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/v2.metadata.json', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/v2.metadata.json', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:309, write_file) 2025-04-04 18:15:07 [ 670 ] INFO : GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:07 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:07 [ 670 ] DEBUG : http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:07 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro user:root, principal:None (hdfs_api.py:256, write_file) 2025-04-04 18:15:07 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:07 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT 
/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:07 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:284, write_file) 2025-04-04 18:15:07 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'Obj\x01\x0e\x16avro.schema\x9e\x1e{"type":"record","name":"manifest_file","fields":[{"name":"manifest_path","type":"string","doc":"Location URI with FS scheme","field-id":500},{"name":"manifest_length","type":"long","doc":"Total file size in bytes","field-id":501},{"name":"partition_spec_id","type":"int","doc":"Spec ID used to write","field-id":502},{"name":"content","type":"int","doc":"Contents of the manifest: 0=data, 1=deletes","field-id":517},{"name":"sequence_number","type":"long","doc":"Sequence number when the manifest was added","field-id":515},{"name":"min_sequence_number","type":"long","doc":"Lowest sequence number in the manifest","field-id":516},{"name":"added_snapshot_id","type":"long","doc":"Snapshot ID that added the manifest","field-id":503},{"name":"added_data_files_count","type":"int","doc":"Added entry count","field-id":504},{"name":"existing_data_files_count","type":"int","doc":"Existing entry count","field-id":505},{"name":"deleted_data_files_count","type":"int","doc":"Deleted entry count","field-id":506},{"name":"added_rows_count","type":"long","doc":"Added rows count","field-id":512},{"name":"existing_rows_count","type":"long","doc":"Existing rows count","field-id":513},{"name":"deleted_rows_count","type":"long","doc":"Deleted rows count","field-id":514},{"name":"partitions","type":["null",{"type":"array","items":{"type":"record","name":"r508","fields":[{"name":"contains_null","type":"boolean","doc":"True if any file has a null partition value","field-id":509},{"name":"contains_nan","type":["null","boolean"],"doc":"True if any file has a nan partition 
value","default":null,"field-id":518},{"name":"lower_bound","type":["null","bytes"],"doc":"Partition lower bound for all files","default":null,"field-id":510},{"name":"upper_bound","type":["null","bytes"],"doc":"Partition upper bound for all files","default":null,"field-id":511}]},"element-id":508}],"doc":"Summary for each partition","default":null,"field-id":507}]}\x14avro.codec\x0edeflate\x16snapshot-id&5114138684830281544\x1cformat-version\x022\x1esequence-number\x021\x1ciceberg.schema\xd6\x1f{"type":"struct","schema-id":0,"fields":[{"id":500,"name":"manifest_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":501,"name":"manifest_length","required":true,"type":"long","doc":"Total file size in bytes"},{"id":502,"name":"partition_spec_id","required":true,"type":"int","doc":"Spec ID used to write"},{"id":517,"name":"content","required":true,"type":"int","doc":"Contents of the manifest: 0=data, 1=deletes"},{"id":515,"name":"sequence_number","required":true,"type":"long","doc":"Sequence number when the manifest was added"},{"id":516,"name":"min_sequence_number","required":true,"type":"long","doc":"Lowest sequence number in the manifest"},{"id":503,"name":"added_snapshot_id","required":true,"type":"long","doc":"Snapshot ID that added the manifest"},{"id":504,"name":"added_data_files_count","required":true,"type":"int","doc":"Added entry count"},{"id":505,"name":"existing_data_files_count","required":true,"type":"int","doc":"Existing entry count"},{"id":506,"name":"deleted_data_files_count","required":true,"type":"int","doc":"Deleted entry count"},{"id":512,"name":"added_rows_count","required":true,"type":"long","doc":"Added rows count"},{"id":513,"name":"existing_rows_count","required":true,"type":"long","doc":"Existing rows count"},{"id":514,"name":"deleted_rows_count","required":true,"type":"long","doc":"Deleted rows count"},{"id":507,"name":"partitions","required":false,"type":{"type":"list","element-id":508,"element":{"type":"struct","fields":[{"id":509,"name":"contains_null","required":true,"type":"boolean","doc":"True if any file has a null partition value"},{"id":518,"name":"contains_nan","required":false,"type":"boolean","doc":"True if any file has a nan partition value"},{"id":510,"name":"lower_bound","required":false,"type":"binary","doc":"Partition lower bound for all files"},{"id":511,"name":"upper_bound","required":false,"type":"binary","doc":"Partition upper bound for all files"}]},"element-required":true},"doc":"Summary for each partition"}]}$parent-snapshot-id\x08null\x00F\xef\x15\xa6\xba\xc3\x1c\'\xf3:W:\xf1n\x85\x96\x02\xa8\x025\xccA\n\xc20\x10@\xd14\xf7\x89\x19b\x9a4\xa7\x19\xa6\x99\xa9\x15Z\x846\xf5\x00\xee\\\x08\xae\xc4\xbd\x87p\xe5\xde\xc3x\x01w\x8a\xe0\xf6?\xf8Wm\xb7YZ\x996\xc8T\xc8\xb2t\xb4\x0c\xc5\x16\x99\x0b\xfe%\x0f\xcb\\dB\x87=w3\x86\x1a\x9a \x9c16-\xa0\x07\xc7H)\x05\xac\xd9\xe7X\x07I]Lv\x94B\xbf#q\xa0\x04\x90\x8coC4\x9e8\x9a\x04\x01\x0c;\xdf0\x05q\xeb$f\x84\x15\xed\xa7\xdd\xabWJ\xeb\xf3\xe3v\xb9\x1f\x8e\xefS\xa5\x95zV\xdf\xa4>F\xef\x15\xa6\xba\xc3\x1c\'\xf3:W:\xf1n\x85\x96', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 
(connectionpool.py:245, _new_conn) 2025-04-04 18:15:07 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79%2Fmetadata%2Fsnap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:07 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:309, write_file) 2025-04-04 18:15:07 [ 670 ] INFO : GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:07 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:07 [ 670 ] DEBUG : http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:07 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro user:root, principal:None (hdfs_api.py:256, write_file) 2025-04-04 18:15:07 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:07 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:07 [ 670 ] DEBUG : 
response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:284, write_file) 2025-04-04 18:15:07 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'Obj\x01\x0e\x16avro.schema\x9e\x1e{"type":"record","name":"manifest_file","fields":[{"name":"manifest_path","type":"string","doc":"Location URI with FS scheme","field-id":500},{"name":"manifest_length","type":"long","doc":"Total file size in bytes","field-id":501},{"name":"partition_spec_id","type":"int","doc":"Spec ID used to write","field-id":502},{"name":"content","type":"int","doc":"Contents of the manifest: 0=data, 1=deletes","field-id":517},{"name":"sequence_number","type":"long","doc":"Sequence number when the manifest was added","field-id":515},{"name":"min_sequence_number","type":"long","doc":"Lowest sequence number in the manifest","field-id":516},{"name":"added_snapshot_id","type":"long","doc":"Snapshot ID that added the manifest","field-id":503},{"name":"added_data_files_count","type":"int","doc":"Added entry count","field-id":504},{"name":"existing_data_files_count","type":"int","doc":"Existing entry count","field-id":505},{"name":"deleted_data_files_count","type":"int","doc":"Deleted entry count","field-id":506},{"name":"added_rows_count","type":"long","doc":"Added rows count","field-id":512},{"name":"existing_rows_count","type":"long","doc":"Existing rows count","field-id":513},{"name":"deleted_rows_count","type":"long","doc":"Deleted rows count","field-id":514},{"name":"partitions","type":["null",{"type":"array","items":{"type":"record","name":"r508","fields":[{"name":"contains_null","type":"boolean","doc":"True if any file has a null partition value","field-id":509},{"name":"contains_nan","type":["null","boolean"],"doc":"True if any file has a nan partition value","default":null,"field-id":518},{"name":"lower_bound","type":["null","bytes"],"doc":"Partition lower bound for all files","default":null,"field-id":510},{"name":"upper_bound","type":["null","bytes"],"doc":"Partition upper bound for all files","default":null,"field-id":511}]},"element-id":508}],"doc":"Summary for each 
partition","default":null,"field-id":507}]}\x14avro.codec\x0edeflate\x16snapshot-id&4370063500831834784\x1cformat-version\x022\x1esequence-number\x022\x1ciceberg.schema\xd6\x1f{"type":"struct","schema-id":0,"fields":[{"id":500,"name":"manifest_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":501,"name":"manifest_length","required":true,"type":"long","doc":"Total file size in bytes"},{"id":502,"name":"partition_spec_id","required":true,"type":"int","doc":"Spec ID used to write"},{"id":517,"name":"content","required":true,"type":"int","doc":"Contents of the manifest: 0=data, 1=deletes"},{"id":515,"name":"sequence_number","required":true,"type":"long","doc":"Sequence number when the manifest was added"},{"id":516,"name":"min_sequence_number","required":true,"type":"long","doc":"Lowest sequence number in the manifest"},{"id":503,"name":"added_snapshot_id","required":true,"type":"long","doc":"Snapshot ID that added the manifest"},{"id":504,"name":"added_data_files_count","required":true,"type":"int","doc":"Added entry count"},{"id":505,"name":"existing_data_files_count","required":true,"type":"int","doc":"Existing entry count"},{"id":506,"name":"deleted_data_files_count","required":true,"type":"int","doc":"Deleted entry count"},{"id":512,"name":"added_rows_count","required":true,"type":"long","doc":"Added rows count"},{"id":513,"name":"existing_rows_count","required":true,"type":"long","doc":"Existing rows count"},{"id":514,"name":"deleted_rows_count","required":true,"type":"long","doc":"Deleted rows count"},{"id":507,"name":"partitions","required":false,"type":{"type":"list","element-id":508,"element":{"type":"struct","fields":[{"id":509,"name":"contains_null","required":true,"type":"boolean","doc":"True if any file has a null partition value"},{"id":518,"name":"contains_nan","required":false,"type":"boolean","doc":"True if any file has a nan partition value"},{"id":510,"name":"lower_bound","required":false,"type":"binary","doc":"Partition lower bound for all files"},{"id":511,"name":"upper_bound","required":false,"type":"binary","doc":"Partition upper bound for all files"}]},"element-required":true},"doc":"Summary for each partition"}]}$parent-snapshot-id&5114138684830281544\x00)\xa7Vyd7W\x90\x80\xfd\xbbip\x83\xe5G\x04\x98\x03\xb5\xce\xb1JC1\x14\x80\xe14\xf4u\xe2M\xd3\xe4\xe4\x9e\xa7\t\'9\'Vh\x11\xeeM\x05W7\x0b\x85N\xd2]|\x05\x9d\x04Gqq\xf7\x01:\x15w7E\xa8o\xe0\xfc\xc3\xcf\xb7\xd7\xddE\x91,\xc3ybj\xd4\xb1TZ/[\xd7dl\xe9T\xcar=6\x19\x92K\x0b\xaec\x82`{\x10.)\xf6\xd9&o\x1d\'B\x84\x14\xd8\x97\x18@\xb0F\xecV\xd2\xe8\xf7\x88\xb3\x1c1\xf8b\x18|6\x1eC0\xb9:1\x01\xca\x9c*VO\xd6\x99\x95=\xa3\xab\xe1\xf2s\xa1\xd4t\xfa\xb2\xf9x\x7f<\xbc=\\k\xa5^\'Ji\xb5\xffw%1\x10Z\x8b\xc6g\x88\xc6\x13G\x83\x16\xaca\xe7{&\x107G9)\x8f?J\xadw\xcf\xf7wO7\xb7_\xdb\xc9\x1f\xf3\x1b)\xa7Vyd7W\x90\x80\xfd\xbbip\x83\xe5G', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:07 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT 
/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79%2Fmetadata%2Fsnap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:07 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:309, write_file) 2025-04-04 18:15:07 [ 670 ] INFO : GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:07 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:07 [ 670 ] DEBUG : http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:07 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/ad6a9009-4b67-4ad7-9060-d248da6e239e-m0.avro user:root, principal:None (hdfs_api.py:256, write_file) 2025-04-04 18:15:07 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/ad6a9009-4b67-4ad7-9060-d248da6e239e-m0.avro?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:07 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/ad6a9009-4b67-4ad7-9060-d248da6e239e-m0.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:07 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 
Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/ad6a9009-4b67-4ad7-9060-d248da6e239e-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/ad6a9009-4b67-4ad7-9060-d248da6e239e-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:284, write_file) 2025-04-04 18:15:07 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/ad6a9009-4b67-4ad7-9060-d248da6e239e-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'Obj\x01\x10\x0cschema\xa4\x02{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":false,"type":"long"},{"id":2,"name":"b","required":false,"type":"string"}]}\x16avro.schema\x925{"type":"record","name":"manifest_entry","fields":[{"name":"status","type":"int","field-id":0},{"name":"snapshot_id","type":["null","long"],"default":null,"field-id":1},{"name":"sequence_number","type":["null","long"],"default":null,"field-id":3},{"name":"file_sequence_number","type":["null","long"],"default":null,"field-id":4},{"name":"data_file","type":{"type":"record","name":"r2","fields":[{"name":"content","type":"int","doc":"Contents of the file: 0=data, 1=position deletes, 2=equality deletes","field-id":134},{"name":"file_path","type":"string","doc":"Location URI with FS scheme","field-id":100},{"name":"file_format","type":"string","doc":"File format name: avro, orc, or parquet","field-id":101},{"name":"partition","type":{"type":"record","name":"r102","fields":[]},"doc":"Partition data tuple, schema based on the partition spec","field-id":102},{"name":"record_count","type":"long","doc":"Number of records in the file","field-id":103},{"name":"file_size_in_bytes","type":"long","doc":"Total file size in bytes","field-id":104},{"name":"column_sizes","type":["null",{"type":"array","items":{"type":"record","name":"k117_v118","fields":[{"name":"key","type":"int","field-id":117},{"name":"value","type":"long","field-id":118}]},"logicalType":"map"}],"doc":"Map of column id to total size on disk","default":null,"field-id":108},{"name":"value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k119_v120","fields":[{"name":"key","type":"int","field-id":119},{"name":"value","type":"long","field-id":120}]},"logicalType":"map"}],"doc":"Map of column id to total count, including null and NaN","default":null,"field-id":109},{"name":"null_value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k121_v122","fields":[{"name":"key","type":"int","field-id":121},{"name":"value","type":"long","field-id":122}]},"logicalType":"map"}],"doc":"Map of column id to null value 
count","default":null,"field-id":110},{"name":"nan_value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k138_v139","fields":[{"name":"key","type":"int","field-id":138},{"name":"value","type":"long","field-id":139}]},"logicalType":"map"}],"doc":"Map of column id to number of NaN values in the column","default":null,"field-id":137},{"name":"lower_bounds","type":["null",{"type":"array","items":{"type":"record","name":"k126_v127","fields":[{"name":"key","type":"int","field-id":126},{"name":"value","type":"bytes","field-id":127}]},"logicalType":"map"}],"doc":"Map of column id to lower bound","default":null,"field-id":125},{"name":"upper_bounds","type":["null",{"type":"array","items":{"type":"record","name":"k129_v130","fields":[{"name":"key","type":"int","field-id":129},{"name":"value","type":"bytes","field-id":130}]},"logicalType":"map"}],"doc":"Map of column id to upper bound","default":null,"field-id":128},{"name":"key_metadata","type":["null","bytes"],"doc":"Encryption key metadata blob","default":null,"field-id":131},{"name":"split_offsets","type":["null",{"type":"array","items":"long","element-id":133}],"doc":"Splittable offsets","default":null,"field-id":132},{"name":"equality_ids","type":["null",{"type":"array","items":"int","element-id":136}],"doc":"Equality comparison field IDs","default":null,"field-id":135},{"name":"sort_order_id","type":["null","int"],"doc":"Sort order ID","default":null,"field-id":140}]},"field-id":2}]}\x14avro.codec\x0edeflate\x1cformat-version\x022"partition-spec-id\x020\x1ciceberg.schema\xca+{"type":"struct","schema-id":0,"fields":[{"id":0,"name":"status","required":true,"type":"int"},{"id":1,"name":"snapshot_id","required":false,"type":"long"},{"id":3,"name":"sequence_number","required":false,"type":"long"},{"id":4,"name":"file_sequence_number","required":false,"type":"long"},{"id":2,"name":"data_file","required":true,"type":{"type":"struct","fields":[{"id":134,"name":"content","required":true,"type":"int","doc":"Contents of the file: 0=data, 1=position deletes, 2=equality deletes"},{"id":100,"name":"file_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":101,"name":"file_format","required":true,"type":"string","doc":"File format name: avro, orc, or parquet"},{"id":102,"name":"partition","required":true,"type":{"type":"struct","fields":[]},"doc":"Partition data tuple, schema based on the partition spec"},{"id":103,"name":"record_count","required":true,"type":"long","doc":"Number of records in the file"},{"id":104,"name":"file_size_in_bytes","required":true,"type":"long","doc":"Total file size in bytes"},{"id":108,"name":"column_sizes","required":false,"type":{"type":"map","key-id":117,"key":"int","value-id":118,"value":"long","value-required":true},"doc":"Map of column id to total size on disk"},{"id":109,"name":"value_counts","required":false,"type":{"type":"map","key-id":119,"key":"int","value-id":120,"value":"long","value-required":true},"doc":"Map of column id to total count, including null and NaN"},{"id":110,"name":"null_value_counts","required":false,"type":{"type":"map","key-id":121,"key":"int","value-id":122,"value":"long","value-required":true},"doc":"Map of column id to null value count"},{"id":137,"name":"nan_value_counts","required":false,"type":{"type":"map","key-id":138,"key":"int","value-id":139,"value":"long","value-required":true},"doc":"Map of column id to number of NaN values in the 
column"},{"id":125,"name":"lower_bounds","required":false,"type":{"type":"map","key-id":126,"key":"int","value-id":127,"value":"binary","value-required":true},"doc":"Map of column id to lower bound"},{"id":128,"name":"upper_bounds","required":false,"type":{"type":"map","key-id":129,"key":"int","value-id":130,"value":"binary","value-required":true},"doc":"Map of column id to upper bound"},{"id":131,"name":"key_metadata","required":false,"type":"binary","doc":"Encryption key metadata blob"},{"id":132,"name":"split_offsets","required":false,"type":{"type":"list","element-id":133,"element":"long","element-required":true},"doc":"Splittable offsets"},{"id":135,"name":"equality_ids","required":false,"type":{"type":"list","element-id":136,"element":"int","element-required":true},"doc":"Equality comparison field IDs"},{"id":140,"name":"sort_order_id","required":false,"type":"int","doc":"Sort order ID"}]}}]}\x1cpartition-spec\x04[]\x0econtent\x08data\x00K\xc8?\xd9\xdd\x91\xffC\xbe\x987[\xcd\xc4\xf4X\x02\x9c\x035\x8c\xbbR\xc30\x10EeEE\xaa\xc0\x8f\x08K\x89eyK\nz\xc8@\xbd\xa3\xc7\x1a\x98I\x01\xb6\xfc\x03t\x140T\x944\xe9R\xa4ME\xef_\xa2\x8b\x95!\xb7:\xf7\xee\x99\xe5\xfc\xebw\xfb}x{\xff\xfb(\x18c{^>\x07\xf2\xd4=bt\xc9\x95\x91Z7lR\x99\xa8Ox\xbe\x84\xcd\xd0\'\xeap\x89O\xb1\xed\xb16\xaa\xa9)\x06\xb4\x8dWX\xa9eD\x07P\xa3\x89U\xb0\xa6&h-\x94\xa7o*Gj\x90\xae\xb1+\xa8\xbc\x97\xe4\xac\x97Uc\x8c\xf4+\xed\xa5\x996\xe5\xf7;\xea\x01\x0fz\xc8\xc3\x1e\xf1\xa8\xc7<\xee\x98\'<\xe9\xb8\xa7\x9c\xf0\xb4g<\xeb9\xcf;\xe9\x05/z\xc9\xcb^\xf1\xaaSN;\xe35\xaf{\xc3\x9b\xde\xf2\xb6w\x9c\xf5\xae\xf7\xbc\xef\x03\x1f\xfa\xc89\x1f\xfb\xc4\xa7\xce\xfb\xcc\x05\x17}\xee\x0b\x97|\xe9+_\xfb\xc6\xb7\xbe\xf3\xbd\xcb~\xf0\xa3\x9f\xfc\xec\x8a\xab\xae\xf9\xc5\xaf~\xf3\xbb?\xfc\xe9/\xd7\xfd\xed\x1f7\xdc\xf4\xaf\xff\x00\x02\xc2\xe7q \x03\x00\x00\x15\x00\x15\xa0\t\x15\xea\x02\x15\x8d\xbc\xbf\xb8\t\x1c\x15\xc8\x01\x15\x00\x15\x08\x15\x08\x00\x00\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00\x1d\xd2\xb1\x81B1\x0cD\xc1\x0f\xd7\x10X\xb2$\xf7\xdf\x18\xe7I^\xb4\x93\xed\xeby\x9e\xef\xeb?\xeb&n\xf2f\xdf\xd4M\xdf\xcc\xcdy\xdf\xf1G\xbf\xba44uki\xeb(\xbb\xd8\xc5.v\xb1\x8b]\xecb\x17\xbb\xd8\xc5\x06\x1bl\xb0\xc1\x06\x1bl\xb0\xc1\x06\x1bl\xb2\xc9&\x9bl\xb2\xc9&\x9bl\xb2\xc9nv\xb3\x9b\xdd\xecf7\xbb\xd9\xcdnv\xb3\xc5\x16[l\xb1\xc5\x16[l\xb1\xc5\x16\xdbl\xb3\xcd6\xdbl\xb3\xcd6\xdbl\xb3\xc3\x0e;\xec\xb0\xc3\x0e;\xec\xb0\xc3\x0e{\xd8\xc3\x1e\xf6\xb0\x87=\xeca\x0f{\xd8s\xfe|\xe3\xf3\x03\xd4\xdb\x86\xadP\x02\x00\x00\x19\x11\x02\x19\x18\x08\x00\x00\x00\x00\x00\x00\x00\x00\x19\x18\x08c\x00\x00\x00\x00\x00\x00\x00\x15\x02\x19\x16\x00\x00\x19\x11\x02\x19\x18\x011\x19\x18\x0299\x15\x02\x19\x16\x00\x00\x19\x1c\x16\x08\x15\xaa\x03\x16\x00\x00\x00\x19\x1c\x16\xb2\x03\x15\x9e\x03\x16\x00\x00\x00\x15\x02\x19\x00&\xb2\x03\x1c\x15\x0c\x19%\x00\x08\x19\x18\x01b\x15\x04\x16\xc8\x01\x16\xd4\t\x16\x9e\x03&\xb2\x03<6\x00(\x0299\x18\x011\x00\x19\x1c\x15\x00\x15\x00\x15\x02\x00\x00\x16\xc8\x07\x15\x18\x16\x8e\x07\x15$\x00\x16\xc8\x16\x16\xc8\x01&\x08\x16\xc8\x06\x14\x00\x00\x19\x1c\x18\x0eiceberg.schema\x18\x90\x01{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":true,"type":"long"},{"id":2,"name":"b","required":true,"type":"string"}]}\x00\x18Jparquet-mr version 1.12.3 (build f8dced182c4c1fbdec6ccb3185537b5a01e6ed6b)\x19,\x1c\x00\x00\x1c\x00\x00\x00\xcf\x01\x00\x00PAR1', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': 
'/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-19-a87394bb-ea7b-4855-b31b-54bb0ab9edd1-00001.parquet', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:07 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-19-a87394bb-ea7b-4855-b31b-54bb0ab9edd1-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79%2Fdata%2F00000-19-a87394bb-ea7b-4855-b31b-54bb0ab9edd1-00001.parquet&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:07 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-19-a87394bb-ea7b-4855-b31b-54bb0ab9edd1-00001.parquet', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-19-a87394bb-ea7b-4855-b31b-54bb0ab9edd1-00001.parquet', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:309, write_file) 2025-04-04 18:15:07 [ 670 ] INFO : GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:07 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:07 [ 670 ] DEBUG : http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:07 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-23-668e9fa1-dc6c-4679-80eb-2f0de8fcb9cd-00001.parquet user:root, principal:None (hdfs_api.py:256, write_file) 2025-04-04 18:15:07 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-23-668e9fa1-dc6c-4679-80eb-2f0de8fcb9cd-00001.parquet?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:07 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT 
/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-23-668e9fa1-dc6c-4679-80eb-2f0de8fcb9cd-00001.parquet?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:07 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-23-668e9fa1-dc6c-4679-80eb-2f0de8fcb9cd-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-23-668e9fa1-dc6c-4679-80eb-2f0de8fcb9cd-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:284, write_file) 2025-04-04 18:15:07 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-23-668e9fa1-dc6c-4679-80eb-2f0de8fcb9cd-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'PAR1\x15\x00\x15\xc0\x0c\x15\xf6\x02\x15\xbd\xf3\xd4\x95\x06\x1c\x15\xc8\x01\x15\x00\x15\x08\x15\x08\x00\x00\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00-\xc5\xd7"\x02\x00\x00\x00\xc0\x90Y![$+\xb3\xb23C\x8a\xec\x8c\xca\x8e\xf2\xff\xff\xe0\xc1\xdd\xcb\x05\x02\xffZ\xdc\xea6\x07\xdd\xee\x0ew\xba\xcb\xdd\xeeq\xc8aG\xdc\xeb>\xf7;\xea\x01\x0fz\xc8\xc3\x1e\xf1\xa8\xc7<\xee\x98\'<\xe9\xb8\xa7\x9c\xf0\xb4g<\xeb9\xcf;\xe9\x05/z\xc9\xcb^\xf1\xaaSN;\xe35\xaf{\xc3\x9b\xde\xf2\xb6w\x9c\xf5\xae\xf7\xbc\xef\x03\x1f\xfa\xc89\x1f\xfb\xc4\xa7\xce\xfb\xcc\x05\x17}\xee\x0b\x97|\xe9+_\xfb\xc6\xb7\xbe\xf3\xbd\xcb~\xf0\xa3\x9f\xfc\xec\x8a\xab\xae\xf9\xc5\xaf~\xf3\xbb?\xfc\xe9/\xd7\xfd\xed\x1f7\xdc\xf4\xaf\xff\x00\x02\xc2\xe7q 
\x03\x00\x00\x15\x00\x15\xa0\t\x15\xea\x02\x15\x8d\xbc\xbf\xb8\t\x1c\x15\xc8\x01\x15\x00\x15\x08\x15\x08\x00\x00\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00\x1d\xd2\xb1\x81B1\x0cD\xc1\x0f\xd7\x10X\xb2$\xf7\xdf\x18\xe7I^\xb4\x93\xed\xeby\x9e\xef\xeb?\xeb&n\xf2f\xdf\xd4M\xdf\xcc\xcdy\xdf\xf1G\xbf\xba44uki\xeb(\xbb\xd8\xc5.v\xb1\x8b]\xecb\x17\xbb\xd8\xc5\x06\x1bl\xb0\xc1\x06\x1bl\xb0\xc1\x06\x1bl\xb2\xc9&\x9bl\xb2\xc9&\x9bl\xb2\xc9nv\xb3\x9b\xdd\xecf7\xbb\xd9\xcdnv\xb3\xc5\x16[l\xb1\xc5\x16[l\xb1\xc5\x16\xdbl\xb3\xcd6\xdbl\xb3\xcd6\xdbl\xb3\xc3\x0e;\xec\xb0\xc3\x0e;\xec\xb0\xc3\x0e{\xd8\xc3\x1e\xf6\xb0\x87=\xeca\x0f{\xd8s\xfe|\xe3\xf3\x03\xd4\xdb\x86\xadP\x02\x00\x00\x19\x11\x02\x19\x18\x08\x00\x00\x00\x00\x00\x00\x00\x00\x19\x18\x08c\x00\x00\x00\x00\x00\x00\x00\x15\x02\x19\x16\x00\x00\x19\x11\x02\x19\x18\x011\x19\x18\x0299\x15\x02\x19\x16\x00\x00\x19\x1c\x16\x08\x15\xaa\x03\x16\x00\x00\x00\x19\x1c\x16\xb2\x03\x15\x9e\x03\x16\x00\x00\x00\x15\x02\x19\x00&\xb2\x03\x1c\x15\x0c\x19%\x00\x08\x19\x18\x01b\x15\x04\x16\xc8\x01\x16\xd4\t\x16\x9e\x03&\xb2\x03<6\x00(\x0299\x18\x011\x00\x19\x1c\x15\x00\x15\x00\x15\x02\x00\x00\x16\xc8\x07\x15\x18\x16\x8e\x07\x15$\x00\x16\xc8\x16\x16\xc8\x01&\x08\x16\xc8\x06\x14\x00\x00\x19\x1c\x18\x0eiceberg.schema\x18\x90\x01{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":true,"type":"long"},{"id":2,"name":"b","required":true,"type":"string"}]}\x00\x18Jparquet-mr version 1.12.3 (build f8dced182c4c1fbdec6ccb3185537b5a01e6ed6b)\x19,\x1c\x00\x00\x1c\x00\x00\x00\xcf\x01\x00\x00PAR1', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-23-668e9fa1-dc6c-4679-80eb-2f0de8fcb9cd-00001.parquet', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:07 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-23-668e9fa1-dc6c-4679-80eb-2f0de8fcb9cd-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79%2Fdata%2F00000-23-668e9fa1-dc6c-4679-80eb-2f0de8fcb9cd-00001.parquet&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:07 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-23-668e9fa1-dc6c-4679-80eb-2f0de8fcb9cd-00001.parquet', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 
'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-23-668e9fa1-dc6c-4679-80eb-2f0de8fcb9cd-00001.parquet', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:309, write_file) 2025-04-04 18:15:07 [ 670 ] INFO : GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:07 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:07 [ 670 ] DEBUG : http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:07 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-21-3cfab2b8-56ba-416d-9c8a-f9630cdd1783-00001.parquet user:root, principal:None (hdfs_api.py:256, write_file) 2025-04-04 18:15:07 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-21-3cfab2b8-56ba-416d-9c8a-f9630cdd1783-00001.parquet?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:07 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-21-3cfab2b8-56ba-416d-9c8a-f9630cdd1783-00001.parquet?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:07 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-21-3cfab2b8-56ba-416d-9c8a-f9630cdd1783-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-21-3cfab2b8-56ba-416d-9c8a-f9630cdd1783-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:284, write_file) 2025-04-04 18:15:07 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-21-3cfab2b8-56ba-416d-9c8a-f9630cdd1783-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': 
b'PAR1\x15\x00\x15\xc0\x0c\x15\xf6\x02\x15\xbd\xf3\xd4\x95\x06\x1c\x15\xc8\x01\x15\x00\x15\x08\x15\x08\x00\x00\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00-\xc5\xd7"\x02\x00\x00\x00\xc0\x90Y![$+\xb3\xb23C\x8a\xec\x8c\xca\x8e\xf2\xff\xff\xe0\xc1\xdd\xcb\x05\x02\xffZ\xdc\xea6\x07\xdd\xee\x0ew\xba\xcb\xdd\xeeq\xc8aG\xdc\xeb>\xf7;\xea\x01\x0fz\xc8\xc3\x1e\xf1\xa8\xc7<\xee\x98\'<\xe9\xb8\xa7\x9c\xf0\xb4g<\xeb9\xcf;\xe9\x05/z\xc9\xcb^\xf1\xaaSN;\xe35\xaf{\xc3\x9b\xde\xf2\xb6w\x9c\xf5\xae\xf7\xbc\xef\x03\x1f\xfa\xc89\x1f\xfb\xc4\xa7\xce\xfb\xcc\x05\x17}\xee\x0b\x97|\xe9+_\xfb\xc6\xb7\xbe\xf3\xbd\xcb~\xf0\xa3\x9f\xfc\xec\x8a\xab\xae\xf9\xc5\xaf~\xf3\xbb?\xfc\xe9/\xd7\xfd\xed\x1f7\xdc\xf4\xaf\xff\x00\x02\xc2\xe7q \x03\x00\x00\x15\x00\x15\xa0\t\x15\xea\x02\x15\x8d\xbc\xbf\xb8\t\x1c\x15\xc8\x01\x15\x00\x15\x08\x15\x08\x00\x00\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00\x1d\xd2\xb1\x81B1\x0cD\xc1\x0f\xd7\x10X\xb2$\xf7\xdf\x18\xe7I^\xb4\x93\xed\xeby\x9e\xef\xeb?\xeb&n\xf2f\xdf\xd4M\xdf\xcc\xcdy\xdf\xf1G\xbf\xba44uki\xeb(\xbb\xd8\xc5.v\xb1\x8b]\xecb\x17\xbb\xd8\xc5\x06\x1bl\xb0\xc1\x06\x1bl\xb0\xc1\x06\x1bl\xb2\xc9&\x9bl\xb2\xc9&\x9bl\xb2\xc9nv\xb3\x9b\xdd\xecf7\xbb\xd9\xcdnv\xb3\xc5\x16[l\xb1\xc5\x16[l\xb1\xc5\x16\xdbl\xb3\xcd6\xdbl\xb3\xcd6\xdbl\xb3\xc3\x0e;\xec\xb0\xc3\x0e;\xec\xb0\xc3\x0e{\xd8\xc3\x1e\xf6\xb0\x87=\xeca\x0f{\xd8s\xfe|\xe3\xf3\x03\xd4\xdb\x86\xadP\x02\x00\x00\x19\x11\x02\x19\x18\x08\x00\x00\x00\x00\x00\x00\x00\x00\x19\x18\x08c\x00\x00\x00\x00\x00\x00\x00\x15\x02\x19\x16\x00\x00\x19\x11\x02\x19\x18\x011\x19\x18\x0299\x15\x02\x19\x16\x00\x00\x19\x1c\x16\x08\x15\xaa\x03\x16\x00\x00\x00\x19\x1c\x16\xb2\x03\x15\x9e\x03\x16\x00\x00\x00\x15\x02\x19\x00&\xb2\x03\x1c\x15\x0c\x19%\x00\x08\x19\x18\x01b\x15\x04\x16\xc8\x01\x16\xd4\t\x16\x9e\x03&\xb2\x03<6\x00(\x0299\x18\x011\x00\x19\x1c\x15\x00\x15\x00\x15\x02\x00\x00\x16\xc8\x07\x15\x18\x16\x8e\x07\x15$\x00\x16\xc8\x16\x16\xc8\x01&\x08\x16\xc8\x06\x14\x00\x00\x19\x1c\x18\x0eiceberg.schema\x18\x90\x01{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":true,"type":"long"},{"id":2,"name":"b","required":true,"type":"string"}]}\x00\x18Jparquet-mr version 1.12.3 (build f8dced182c4c1fbdec6ccb3185537b5a01e6ed6b)\x19,\x1c\x00\x00\x1c\x00\x00\x00\xcf\x01\x00\x00PAR1', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-21-3cfab2b8-56ba-416d-9c8a-f9630cdd1783-00001.parquet', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:07 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-21-3cfab2b8-56ba-416d-9c8a-f9630cdd1783-00001.parquet?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79%2Fdata%2F00000-21-3cfab2b8-56ba-416d-9c8a-f9630cdd1783-00001.parquet&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:07 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 
'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-21-3cfab2b8-56ba-416d-9c8a-f9630cdd1783-00001.parquet', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/data/00000-21-3cfab2b8-56ba-416d-9c8a-f9630cdd1783-00001.parquet', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:309, write_file) 2025-04-04 18:15:07 [ 670 ] INFO : GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:07 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:07 [ 670 ] DEBUG : http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:07 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/91b7954c-d64b-4955-bf2e-56c3af9f4a02-m0.avro user:root, principal:None (hdfs_api.py:256, write_file) 2025-04-04 18:15:07 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/91b7954c-d64b-4955-bf2e-56c3af9f4a02-m0.avro?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:07 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/91b7954c-d64b-4955-bf2e-56c3af9f4a02-m0.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:07 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/91b7954c-d64b-4955-bf2e-56c3af9f4a02-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 
'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/91b7954c-d64b-4955-bf2e-56c3af9f4a02-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:284, write_file) 2025-04-04 18:15:07 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/91b7954c-d64b-4955-bf2e-56c3af9f4a02-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'Obj\x01\x10\x0cschema\xa4\x02{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":false,"type":"long"},{"id":2,"name":"b","required":false,"type":"string"}]}\x16avro.schema\x925{"type":"record","name":"manifest_entry","fields":[{"name":"status","type":"int","field-id":0},{"name":"snapshot_id","type":["null","long"],"default":null,"field-id":1},{"name":"sequence_number","type":["null","long"],"default":null,"field-id":3},{"name":"file_sequence_number","type":["null","long"],"default":null,"field-id":4},{"name":"data_file","type":{"type":"record","name":"r2","fields":[{"name":"content","type":"int","doc":"Contents of the file: 0=data, 1=position deletes, 2=equality deletes","field-id":134},{"name":"file_path","type":"string","doc":"Location URI with FS scheme","field-id":100},{"name":"file_format","type":"string","doc":"File format name: avro, orc, or parquet","field-id":101},{"name":"partition","type":{"type":"record","name":"r102","fields":[]},"doc":"Partition data tuple, schema based on the partition spec","field-id":102},{"name":"record_count","type":"long","doc":"Number of records in the file","field-id":103},{"name":"file_size_in_bytes","type":"long","doc":"Total file size in bytes","field-id":104},{"name":"column_sizes","type":["null",{"type":"array","items":{"type":"record","name":"k117_v118","fields":[{"name":"key","type":"int","field-id":117},{"name":"value","type":"long","field-id":118}]},"logicalType":"map"}],"doc":"Map of column id to total size on disk","default":null,"field-id":108},{"name":"value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k119_v120","fields":[{"name":"key","type":"int","field-id":119},{"name":"value","type":"long","field-id":120}]},"logicalType":"map"}],"doc":"Map of column id to total count, including null and NaN","default":null,"field-id":109},{"name":"null_value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k121_v122","fields":[{"name":"key","type":"int","field-id":121},{"name":"value","type":"long","field-id":122}]},"logicalType":"map"}],"doc":"Map of column id to null value count","default":null,"field-id":110},{"name":"nan_value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k138_v139","fields":[{"name":"key","type":"int","field-id":138},{"name":"value","type":"long","field-id":139}]},"logicalType":"map"}],"doc":"Map of column id to number of NaN values in the column","default":null,"field-id":137},{"name":"lower_bounds","type":["null",{"type":"array","items":{"type":"record","name":"k126_v127","fields":[{"name":"key","type":"int","field-id":126},{"name":"value","type":"bytes","field-id":127}]},"logicalType":"map"}],"doc":"Map of column id to lower 
bound","default":null,"field-id":125},{"name":"upper_bounds","type":["null",{"type":"array","items":{"type":"record","name":"k129_v130","fields":[{"name":"key","type":"int","field-id":129},{"name":"value","type":"bytes","field-id":130}]},"logicalType":"map"}],"doc":"Map of column id to upper bound","default":null,"field-id":128},{"name":"key_metadata","type":["null","bytes"],"doc":"Encryption key metadata blob","default":null,"field-id":131},{"name":"split_offsets","type":["null",{"type":"array","items":"long","element-id":133}],"doc":"Splittable offsets","default":null,"field-id":132},{"name":"equality_ids","type":["null",{"type":"array","items":"int","element-id":136}],"doc":"Equality comparison field IDs","default":null,"field-id":135},{"name":"sort_order_id","type":["null","int"],"doc":"Sort order ID","default":null,"field-id":140}]},"field-id":2}]}\x14avro.codec\x0edeflate\x1cformat-version\x022"partition-spec-id\x020\x1ciceberg.schema\xca+{"type":"struct","schema-id":0,"fields":[{"id":0,"name":"status","required":true,"type":"int"},{"id":1,"name":"snapshot_id","required":false,"type":"long"},{"id":3,"name":"sequence_number","required":false,"type":"long"},{"id":4,"name":"file_sequence_number","required":false,"type":"long"},{"id":2,"name":"data_file","required":true,"type":{"type":"struct","fields":[{"id":134,"name":"content","required":true,"type":"int","doc":"Contents of the file: 0=data, 1=position deletes, 2=equality deletes"},{"id":100,"name":"file_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":101,"name":"file_format","required":true,"type":"string","doc":"File format name: avro, orc, or parquet"},{"id":102,"name":"partition","required":true,"type":{"type":"struct","fields":[]},"doc":"Partition data tuple, schema based on the partition spec"},{"id":103,"name":"record_count","required":true,"type":"long","doc":"Number of records in the file"},{"id":104,"name":"file_size_in_bytes","required":true,"type":"long","doc":"Total file size in bytes"},{"id":108,"name":"column_sizes","required":false,"type":{"type":"map","key-id":117,"key":"int","value-id":118,"value":"long","value-required":true},"doc":"Map of column id to total size on disk"},{"id":109,"name":"value_counts","required":false,"type":{"type":"map","key-id":119,"key":"int","value-id":120,"value":"long","value-required":true},"doc":"Map of column id to total count, including null and NaN"},{"id":110,"name":"null_value_counts","required":false,"type":{"type":"map","key-id":121,"key":"int","value-id":122,"value":"long","value-required":true},"doc":"Map of column id to null value count"},{"id":137,"name":"nan_value_counts","required":false,"type":{"type":"map","key-id":138,"key":"int","value-id":139,"value":"long","value-required":true},"doc":"Map of column id to number of NaN values in the column"},{"id":125,"name":"lower_bounds","required":false,"type":{"type":"map","key-id":126,"key":"int","value-id":127,"value":"binary","value-required":true},"doc":"Map of column id to lower bound"},{"id":128,"name":"upper_bounds","required":false,"type":{"type":"map","key-id":129,"key":"int","value-id":130,"value":"binary","value-required":true},"doc":"Map of column id to upper bound"},{"id":131,"name":"key_metadata","required":false,"type":"binary","doc":"Encryption key metadata blob"},{"id":132,"name":"split_offsets","required":false,"type":{"type":"list","element-id":133,"element":"long","element-required":true},"doc":"Splittable 
offsets"},{"id":135,"name":"equality_ids","required":false,"type":{"type":"list","element-id":136,"element":"int","element-required":true},"doc":"Equality comparison field IDs"},{"id":140,"name":"sort_order_id","required":false,"type":"int","doc":"Sort order ID"}]}}]}\x1cpartition-spec\x04[]\x0econtent\x08data\x00N\xa1\x91W#4\x9a$\xc4\x02\x0f\xa1N\xd6\xebD\x02\x9e\x035\x8c\xadR\x031\x14\x85\xb3i\x04\n\xfa"\xe9&\xfb\x93Md\x05\x1e\x18\xd0w\x92\xdc\xa4\xedLE\xbb\x9b\x15}\x85\n\x1e\x01\x87A \xd1\xc8\x1d\x0c\x9e\x07\xe09h:\xf4\xa8\xef\xcc\xf9\xe6P\xfay\xfc\xf9\xfe\xf8\xfdz=\x10B\xdei\xb9\xf1\xc1\x85~\x05h\x93-1D;nS\x99\xc2\x90\xe0\xb2\xf8\xed8\xa4\xd0C\x05k\x8c\x03\xa8Vh\x15\xd0C\xa7\x9d\x80FT\x08\xd6\x18\x05-6\xbekU0\xb13\xe5\xf9M\xe4\xf0J\xf2\xdaG\xeb*\xa7y\xab\x9c\xe5\x8dT\xc8\x8d\xd7\x96G\xa3j\xe1\x11e\xa7k\x9ee\xb9\xd8\xd9~?\x86t}\xb7|\xb8\x7f\xba}\x9c\x8a\xe7\x1b\xca\xe8\xdb\x8c\xbd\xcc\xc8\t\xa6\x82ME\x06\xc2\x08\xa1\x19\xe6\xe4?\x8c\xcas\xf7\x97\xce\x8c99\xf4*\x8b\x7fN\xa1\x91W#4\x9a$\xc4\x02\x0f\xa1N\xd6\xebD', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/91b7954c-d64b-4955-bf2e-56c3af9f4a02-m0.avro', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:07 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/91b7954c-d64b-4955-bf2e-56c3af9f4a02-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79%2Fmetadata%2F91b7954c-d64b-4955-bf2e-56c3af9f4a02-m0.avro&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:07 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/91b7954c-d64b-4955-bf2e-56c3af9f4a02-m0.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/91b7954c-d64b-4955-bf2e-56c3af9f4a02-m0.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:309, write_file) 2025-04-04 18:15:07 [ 670 ] INFO : GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:07 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:07 [ 670 ] DEBUG : http://172.16.2.2:50070 "GET 
/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:07 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/v2.metadata.json user:root, principal:None (hdfs_api.py:256, write_file) 2025-04-04 18:15:07 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/v2.metadata.json?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:07 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/v2.metadata.json?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:07 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/v2.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/v2.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:284, write_file) 2025-04-04 18:15:07 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/v2.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'{\n "format-version" : 2,\n "table-uuid" : "c91f0788-0ba8-4ca9-9925-2ab9a7a11eb7",\n "location" : "/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79",\n "last-sequence-number" : 2,\n "last-updated-ms" : 1743790506942,\n "last-column-id" : 2,\n "current-schema-id" : 0,\n "schemas" : [ {\n "type" : "struct",\n "schema-id" : 0,\n "fields" : [ {\n "id" : 1,\n "name" : "a",\n "required" : false,\n "type" : "long"\n }, {\n "id" : 2,\n "name" : "b",\n "required" : false,\n "type" : "string"\n } ]\n } ],\n "default-spec-id" : 0,\n "partition-specs" : [ {\n "spec-id" : 0,\n "fields" : [ ]\n } ],\n "last-partition-id" : 999,\n "default-sort-order-id" : 0,\n "sort-orders" : [ {\n "order-id" : 0,\n "fields" : [ ]\n } ],\n "properties" : {\n "owner" : "root"\n },\n "current-snapshot-id" : 4370063500831834784,\n "refs" : {\n "main" : 
{\n "snapshot-id" : 4370063500831834784,\n "type" : "branch"\n }\n },\n "snapshots" : [ {\n "sequence-number" : 1,\n "snapshot-id" : 5114138684830281544,\n "timestamp-ms" : 1743790506546,\n "summary" : {\n "operation" : "append",\n "spark.app.id" : "local-1743790492634",\n "added-data-files" : "1",\n "added-records" : "100",\n "added-files-size" : "967",\n "changed-partition-count" : "1",\n "total-records" : "100",\n "total-files-size" : "967",\n "total-data-files" : "1",\n "total-delete-files" : "0",\n "total-position-deletes" : "0",\n "total-equality-deletes" : "0"\n },\n "manifest-list" : "/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro",\n "schema-id" : 0\n }, {\n "sequence-number" : 2,\n "snapshot-id" : 4370063500831834784,\n "parent-snapshot-id" : 5114138684830281544,\n "timestamp-ms" : 1743790506942,\n "summary" : {\n "operation" : "append",\n "spark.app.id" : "local-1743790492634",\n "added-data-files" : "1",\n "added-records" : "100",\n "added-files-size" : "967",\n "changed-partition-count" : "1",\n "total-records" : "200",\n "total-files-size" : "1934",\n "total-data-files" : "2",\n "total-delete-files" : "0",\n "total-position-deletes" : "0",\n "total-equality-deletes" : "0"\n },\n "manifest-list" : "/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro",\n "schema-id" : 0\n } ],\n "statistics" : [ ],\n "snapshot-log" : [ {\n "timestamp-ms" : 1743790506546,\n "snapshot-id" : 5114138684830281544\n }, {\n "timestamp-ms" : 1743790506942,\n "snapshot-id" : 4370063500831834784\n } ],\n "metadata-log" : [ {\n "timestamp-ms" : 1743790506546,\n "metadata-file" : "/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/v1.metadata.json"\n } ]\n}', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/v2.metadata.json', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:07 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/v2.metadata.json?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79%2Fmetadata%2Fv2.metadata.json&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:07 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/v2.metadata.json', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 
04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/v2.metadata.json', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:309, write_file) 2025-04-04 18:15:07 [ 670 ] INFO : GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:07 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:07 [ 670 ] DEBUG : http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:07 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro user:root, principal:None (hdfs_api.py:256, write_file) 2025-04-04 18:15:07 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:07 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:07 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:284, write_file) 2025-04-04 18:15:07 [ 670 ] DEBUG : CALL: {'url': 
'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'Obj\x01\x0e\x16avro.schema\x9e\x1e{"type":"record","name":"manifest_file","fields":[{"name":"manifest_path","type":"string","doc":"Location URI with FS scheme","field-id":500},{"name":"manifest_length","type":"long","doc":"Total file size in bytes","field-id":501},{"name":"partition_spec_id","type":"int","doc":"Spec ID used to write","field-id":502},{"name":"content","type":"int","doc":"Contents of the manifest: 0=data, 1=deletes","field-id":517},{"name":"sequence_number","type":"long","doc":"Sequence number when the manifest was added","field-id":515},{"name":"min_sequence_number","type":"long","doc":"Lowest sequence number in the manifest","field-id":516},{"name":"added_snapshot_id","type":"long","doc":"Snapshot ID that added the manifest","field-id":503},{"name":"added_data_files_count","type":"int","doc":"Added entry count","field-id":504},{"name":"existing_data_files_count","type":"int","doc":"Existing entry count","field-id":505},{"name":"deleted_data_files_count","type":"int","doc":"Deleted entry count","field-id":506},{"name":"added_rows_count","type":"long","doc":"Added rows count","field-id":512},{"name":"existing_rows_count","type":"long","doc":"Existing rows count","field-id":513},{"name":"deleted_rows_count","type":"long","doc":"Deleted rows count","field-id":514},{"name":"partitions","type":["null",{"type":"array","items":{"type":"record","name":"r508","fields":[{"name":"contains_null","type":"boolean","doc":"True if any file has a null partition value","field-id":509},{"name":"contains_nan","type":["null","boolean"],"doc":"True if any file has a nan partition value","default":null,"field-id":518},{"name":"lower_bound","type":["null","bytes"],"doc":"Partition lower bound for all files","default":null,"field-id":510},{"name":"upper_bound","type":["null","bytes"],"doc":"Partition upper bound for all files","default":null,"field-id":511}]},"element-id":508}],"doc":"Summary for each partition","default":null,"field-id":507}]}\x14avro.codec\x0edeflate\x16snapshot-id&5114138684830281544\x1cformat-version\x022\x1esequence-number\x021\x1ciceberg.schema\xd6\x1f{"type":"struct","schema-id":0,"fields":[{"id":500,"name":"manifest_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":501,"name":"manifest_length","required":true,"type":"long","doc":"Total file size in bytes"},{"id":502,"name":"partition_spec_id","required":true,"type":"int","doc":"Spec ID used to write"},{"id":517,"name":"content","required":true,"type":"int","doc":"Contents of the manifest: 0=data, 1=deletes"},{"id":515,"name":"sequence_number","required":true,"type":"long","doc":"Sequence number when the manifest was added"},{"id":516,"name":"min_sequence_number","required":true,"type":"long","doc":"Lowest sequence number in the manifest"},{"id":503,"name":"added_snapshot_id","required":true,"type":"long","doc":"Snapshot ID that added the manifest"},{"id":504,"name":"added_data_files_count","required":true,"type":"int","doc":"Added entry count"},{"id":505,"name":"existing_data_files_count","required":true,"type":"int","doc":"Existing entry count"},{"id":506,"name":"deleted_data_files_count","required":true,"type":"int","doc":"Deleted entry count"},{"id":512,"name":"added_rows_count","required":true,"type":"long","doc":"Added rows 
count"},{"id":513,"name":"existing_rows_count","required":true,"type":"long","doc":"Existing rows count"},{"id":514,"name":"deleted_rows_count","required":true,"type":"long","doc":"Deleted rows count"},{"id":507,"name":"partitions","required":false,"type":{"type":"list","element-id":508,"element":{"type":"struct","fields":[{"id":509,"name":"contains_null","required":true,"type":"boolean","doc":"True if any file has a null partition value"},{"id":518,"name":"contains_nan","required":false,"type":"boolean","doc":"True if any file has a nan partition value"},{"id":510,"name":"lower_bound","required":false,"type":"binary","doc":"Partition lower bound for all files"},{"id":511,"name":"upper_bound","required":false,"type":"binary","doc":"Partition upper bound for all files"}]},"element-required":true},"doc":"Summary for each partition"}]}$parent-snapshot-id\x08null\x00F\xef\x15\xa6\xba\xc3\x1c\'\xf3:W:\xf1n\x85\x96\x02\xa8\x025\xccA\n\xc20\x10@\xd14\xf7\x89\x19b\x9a4\xa7\x19\xa6\x99\xa9\x15Z\x846\xf5\x00\xee\\\x08\xae\xc4\xbd\x87p\xe5\xde\xc3x\x01w\x8a\xe0\xf6?\xf8Wm\xb7YZ\x996\xc8T\xc8\xb2t\xb4\x0c\xc5\x16\x99\x0b\xfe%\x0f\xcb\\dB\x87=w3\x86\x1a\x9a \x9c16-\xa0\x07\xc7H)\x05\xac\xd9\xe7X\x07I]Lv\x94B\xbf#q\xa0\x04\x90\x8coC4\x9e8\x9a\x04\x01\x0c;\xdf0\x05q\xeb$f\x84\x15\xed\xa7\xdd\xabWJ\xeb\xf3\xe3v\xb9\x1f\x8e\xefS\xa5\x95zV\xdf\xa4>F\xef\x15\xa6\xba\xc3\x1c\'\xf3:W:\xf1n\x85\x96', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:07 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79%2Fmetadata%2Fsnap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:07 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-5114138684830281544-1-ad6a9009-4b67-4ad7-9060-d248da6e239e.avro', 'Content-Length': '0', 'Server': 
'Jetty(6.1.26)'} (hdfs_api.py:309, write_file) 2025-04-04 18:15:07 [ 670 ] INFO : GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:07 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:07 [ 670 ] DEBUG : http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:07 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro user:root, principal:None (hdfs_api.py:256, write_file) 2025-04-04 18:15:07 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:07 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:07 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:284, write_file) 2025-04-04 18:15:07 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': 
b'Obj\x01\x0e\x16avro.schema\x9e\x1e{"type":"record","name":"manifest_file","fields":[{"name":"manifest_path","type":"string","doc":"Location URI with FS scheme","field-id":500},{"name":"manifest_length","type":"long","doc":"Total file size in bytes","field-id":501},{"name":"partition_spec_id","type":"int","doc":"Spec ID used to write","field-id":502},{"name":"content","type":"int","doc":"Contents of the manifest: 0=data, 1=deletes","field-id":517},{"name":"sequence_number","type":"long","doc":"Sequence number when the manifest was added","field-id":515},{"name":"min_sequence_number","type":"long","doc":"Lowest sequence number in the manifest","field-id":516},{"name":"added_snapshot_id","type":"long","doc":"Snapshot ID that added the manifest","field-id":503},{"name":"added_data_files_count","type":"int","doc":"Added entry count","field-id":504},{"name":"existing_data_files_count","type":"int","doc":"Existing entry count","field-id":505},{"name":"deleted_data_files_count","type":"int","doc":"Deleted entry count","field-id":506},{"name":"added_rows_count","type":"long","doc":"Added rows count","field-id":512},{"name":"existing_rows_count","type":"long","doc":"Existing rows count","field-id":513},{"name":"deleted_rows_count","type":"long","doc":"Deleted rows count","field-id":514},{"name":"partitions","type":["null",{"type":"array","items":{"type":"record","name":"r508","fields":[{"name":"contains_null","type":"boolean","doc":"True if any file has a null partition value","field-id":509},{"name":"contains_nan","type":["null","boolean"],"doc":"True if any file has a nan partition value","default":null,"field-id":518},{"name":"lower_bound","type":["null","bytes"],"doc":"Partition lower bound for all files","default":null,"field-id":510},{"name":"upper_bound","type":["null","bytes"],"doc":"Partition upper bound for all files","default":null,"field-id":511}]},"element-id":508}],"doc":"Summary for each partition","default":null,"field-id":507}]}\x14avro.codec\x0edeflate\x16snapshot-id&4370063500831834784\x1cformat-version\x022\x1esequence-number\x022\x1ciceberg.schema\xd6\x1f{"type":"struct","schema-id":0,"fields":[{"id":500,"name":"manifest_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":501,"name":"manifest_length","required":true,"type":"long","doc":"Total file size in bytes"},{"id":502,"name":"partition_spec_id","required":true,"type":"int","doc":"Spec ID used to write"},{"id":517,"name":"content","required":true,"type":"int","doc":"Contents of the manifest: 0=data, 1=deletes"},{"id":515,"name":"sequence_number","required":true,"type":"long","doc":"Sequence number when the manifest was added"},{"id":516,"name":"min_sequence_number","required":true,"type":"long","doc":"Lowest sequence number in the manifest"},{"id":503,"name":"added_snapshot_id","required":true,"type":"long","doc":"Snapshot ID that added the manifest"},{"id":504,"name":"added_data_files_count","required":true,"type":"int","doc":"Added entry count"},{"id":505,"name":"existing_data_files_count","required":true,"type":"int","doc":"Existing entry count"},{"id":506,"name":"deleted_data_files_count","required":true,"type":"int","doc":"Deleted entry count"},{"id":512,"name":"added_rows_count","required":true,"type":"long","doc":"Added rows count"},{"id":513,"name":"existing_rows_count","required":true,"type":"long","doc":"Existing rows count"},{"id":514,"name":"deleted_rows_count","required":true,"type":"long","doc":"Deleted rows 
count"},{"id":507,"name":"partitions","required":false,"type":{"type":"list","element-id":508,"element":{"type":"struct","fields":[{"id":509,"name":"contains_null","required":true,"type":"boolean","doc":"True if any file has a null partition value"},{"id":518,"name":"contains_nan","required":false,"type":"boolean","doc":"True if any file has a nan partition value"},{"id":510,"name":"lower_bound","required":false,"type":"binary","doc":"Partition lower bound for all files"},{"id":511,"name":"upper_bound","required":false,"type":"binary","doc":"Partition upper bound for all files"}]},"element-required":true},"doc":"Summary for each partition"}]}$parent-snapshot-id&5114138684830281544\x00)\xa7Vyd7W\x90\x80\xfd\xbbip\x83\xe5G\x04\x98\x03\xb5\xce\xb1JC1\x14\x80\xe14\xf4u\xe2M\xd3\xe4\xe4\x9e\xa7\t\'9\'Vh\x11\xeeM\x05W7\x0b\x85N\xd2]|\x05\x9d\x04Gqq\xf7\x01:\x15w7E\xa8o\xe0\xfc\xc3\xcf\xb7\xd7\xddE\x91,\xc3ybj\xd4\xb1TZ/[\xd7dl\xe9T\xcar=6\x19\x92K\x0b\xaec\x82`{\x10.)\xf6\xd9&o\x1d\'B\x84\x14\xd8\x97\x18@\xb0F\xecV\xd2\xe8\xf7\x88\xb3\x1c1\xf8b\x18|6\x1eC0\xb9:1\x01\xca\x9c*VO\xd6\x99\x95=\xa3\xab\xe1\xf2s\xa1\xd4t\xfa\xb2\xf9x\x7f<\xbc=\\k\xa5^\'Ji\xb5\xffw%1\x10Z\x8b\xc6g\x88\xc6\x13G\x83\x16\xaca\xe7{&\x107G9)\x8f?J\xadw\xcf\xf7wO7\xb7_\xdb\xc9\x1f\xf3\x1b)\xa7Vyd7W\x90\x80\xfd\xbbip\x83\xe5G', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:07 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79%2Fmetadata%2Fsnap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:07 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-4370063500831834784-1-91b7954c-d64b-4955-bf2e-56c3af9f4a02.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:309, write_file) 2025-04-04 18:15:07 [ 670 ] INFO : 
GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:07 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:07 [ 670 ] DEBUG : http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:07 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/ad6a9009-4b67-4ad7-9060-d248da6e239e-m0.avro user:root, principal:None (hdfs_api.py:256, write_file) 2025-04-04 18:15:07 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/ad6a9009-4b67-4ad7-9060-d248da6e239e-m0.avro?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:07 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/ad6a9009-4b67-4ad7-9060-d248da6e239e-m0.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:07 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/ad6a9009-4b67-4ad7-9060-d248da6e239e-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/ad6a9009-4b67-4ad7-9060-d248da6e239e-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:284, write_file) 2025-04-04 18:15:07 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/ad6a9009-4b67-4ad7-9060-d248da6e239e-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': 
b'Obj\x01\x10\x0cschema\xa4\x02{"type":"struct","schema-id":0,"fields":[{"id":1,"name":"a","required":false,"type":"long"},{"id":2,"name":"b","required":false,"type":"string"}]}\x16avro.schema\x925{"type":"record","name":"manifest_entry","fields":[{"name":"status","type":"int","field-id":0},{"name":"snapshot_id","type":["null","long"],"default":null,"field-id":1},{"name":"sequence_number","type":["null","long"],"default":null,"field-id":3},{"name":"file_sequence_number","type":["null","long"],"default":null,"field-id":4},{"name":"data_file","type":{"type":"record","name":"r2","fields":[{"name":"content","type":"int","doc":"Contents of the file: 0=data, 1=position deletes, 2=equality deletes","field-id":134},{"name":"file_path","type":"string","doc":"Location URI with FS scheme","field-id":100},{"name":"file_format","type":"string","doc":"File format name: avro, orc, or parquet","field-id":101},{"name":"partition","type":{"type":"record","name":"r102","fields":[]},"doc":"Partition data tuple, schema based on the partition spec","field-id":102},{"name":"record_count","type":"long","doc":"Number of records in the file","field-id":103},{"name":"file_size_in_bytes","type":"long","doc":"Total file size in bytes","field-id":104},{"name":"column_sizes","type":["null",{"type":"array","items":{"type":"record","name":"k117_v118","fields":[{"name":"key","type":"int","field-id":117},{"name":"value","type":"long","field-id":118}]},"logicalType":"map"}],"doc":"Map of column id to total size on disk","default":null,"field-id":108},{"name":"value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k119_v120","fields":[{"name":"key","type":"int","field-id":119},{"name":"value","type":"long","field-id":120}]},"logicalType":"map"}],"doc":"Map of column id to total count, including null and NaN","default":null,"field-id":109},{"name":"null_value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k121_v122","fields":[{"name":"key","type":"int","field-id":121},{"name":"value","type":"long","field-id":122}]},"logicalType":"map"}],"doc":"Map of column id to null value count","default":null,"field-id":110},{"name":"nan_value_counts","type":["null",{"type":"array","items":{"type":"record","name":"k138_v139","fields":[{"name":"key","type":"int","field-id":138},{"name":"value","type":"long","field-id":139}]},"logicalType":"map"}],"doc":"Map of column id to number of NaN values in the column","default":null,"field-id":137},{"name":"lower_bounds","type":["null",{"type":"array","items":{"type":"record","name":"k126_v127","fields":[{"name":"key","type":"int","field-id":126},{"name":"value","type":"bytes","field-id":127}]},"logicalType":"map"}],"doc":"Map of column id to lower bound","default":null,"field-id":125},{"name":"upper_bounds","type":["null",{"type":"array","items":{"type":"record","name":"k129_v130","fields":[{"name":"key","type":"int","field-id":129},{"name":"value","type":"bytes","field-id":130}]},"logicalType":"map"}],"doc":"Map of column id to upper bound","default":null,"field-id":128},{"name":"key_metadata","type":["null","bytes"],"doc":"Encryption key metadata blob","default":null,"field-id":131},{"name":"split_offsets","type":["null",{"type":"array","items":"long","element-id":133}],"doc":"Splittable offsets","default":null,"field-id":132},{"name":"equality_ids","type":["null",{"type":"array","items":"int","element-id":136}],"doc":"Equality comparison field IDs","default":null,"field-id":135},{"name":"sort_order_id","type":["null","int"],"doc":"Sort order 
ID","default":null,"field-id":140}]},"field-id":2}]}\x14avro.codec\x0edeflate\x1cformat-version\x022"partition-spec-id\x020\x1ciceberg.schema\xca+{"type":"struct","schema-id":0,"fields":[{"id":0,"name":"status","required":true,"type":"int"},{"id":1,"name":"snapshot_id","required":false,"type":"long"},{"id":3,"name":"sequence_number","required":false,"type":"long"},{"id":4,"name":"file_sequence_number","required":false,"type":"long"},{"id":2,"name":"data_file","required":true,"type":{"type":"struct","fields":[{"id":134,"name":"content","required":true,"type":"int","doc":"Contents of the file: 0=data, 1=position deletes, 2=equality deletes"},{"id":100,"name":"file_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":101,"name":"file_format","required":true,"type":"string","doc":"File format name: avro, orc, or parquet"},{"id":102,"name":"partition","required":true,"type":{"type":"struct","fields":[]},"doc":"Partition data tuple, schema based on the partition spec"},{"id":103,"name":"record_count","required":true,"type":"long","doc":"Number of records in the file"},{"id":104,"name":"file_size_in_bytes","required":true,"type":"long","doc":"Total file size in bytes"},{"id":108,"name":"column_sizes","required":false,"type":{"type":"map","key-id":117,"key":"int","value-id":118,"value":"long","value-required":true},"doc":"Map of column id to total size on disk"},{"id":109,"name":"value_counts","required":false,"type":{"type":"map","key-id":119,"key":"int","value-id":120,"value":"long","value-required":true},"doc":"Map of column id to total count, including null and NaN"},{"id":110,"name":"null_value_counts","required":false,"type":{"type":"map","key-id":121,"key":"int","value-id":122,"value":"long","value-required":true},"doc":"Map of column id to null value count"},{"id":137,"name":"nan_value_counts","required":false,"type":{"type":"map","key-id":138,"key":"int","value-id":139,"value":"long","value-required":true},"doc":"Map of column id to number of NaN values in the column"},{"id":125,"name":"lower_bounds","required":false,"type":{"type":"map","key-id":126,"key":"int","value-id":127,"value":"binary","value-required":true},"doc":"Map of column id to lower bound"},{"id":128,"name":"upper_bounds","required":false,"type":{"type":"map","key-id":129,"key":"int","value-id":130,"value":"binary","value-required":true},"doc":"Map of column id to upper bound"},{"id":131,"name":"key_metadata","required":false,"type":"binary","doc":"Encryption key metadata blob"},{"id":132,"name":"split_offsets","required":false,"type":{"type":"list","element-id":133,"element":"long","element-required":true},"doc":"Splittable offsets"},{"id":135,"name":"equality_ids","required":false,"type":{"type":"list","element-id":136,"element":"int","element-required":true},"doc":"Equality comparison field IDs"},{"id":140,"name":"sort_order_id","required":false,"type":"int","doc":"Sort order ID"}]}}]}\x1cpartition-spec\x04[]\x0econtent\x08data\x00K\xc8?\xd9\xdd\x91\xffC\xbe\x987[\xcd\xc4\xf4X\x02\x9c\x035\x8c\xbbR\xc30\x10EeEE\xaa\xc0\x8f\x08K\x89eyK\nz\xc8@\xbd\xa3\xc7\x1a\x98I\x01\xb6\xfc\x03t\x140T\x944\xe9R\xa4ME\xef_\xa2\x8b\x95!\xb7:\xf7\xee\x99\xe5\xfc\xebw\xfb}x{\xff\xfb(\x18c{^>\x07\xf2\xd4=bt\xc9\x95\x91Z7lR\x99\xa8Ox\xbe\x84\xcd\xd0\'\xeap\x89O\xb1\xed\xb16\xaa\xa9)\x06\xb4\x8dWX\xa9eD\x07P\xa3\x89U\xb0\xa6&h-\x94\xa7o*Gj\x90\xae\xb1+\xa8\xbc\x97\xe4\xac\x97Uc\x8c\xf4+\xed\xa5\x996\xe5JV\xbb\x8e\x87\xae\xbc_\x1cy\x94\xc5\x7f\xe7\x01\x1a7D\xb6\xc6\xcc\x01\n\xc3\xc2\x1c8\xa9\x05', 'headers': 
{'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/17db9cfb-7964-4f0a-ba5a-846c0f7c0255-m0.avro', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:07 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/17db9cfb-7964-4f0a-ba5a-846c0f7c0255-m0.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79%2Fmetadata%2F17db9cfb-7964-4f0a-ba5a-846c0f7c0255-m0.avro&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:07 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/17db9cfb-7964-4f0a-ba5a-846c0f7c0255-m0.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/17db9cfb-7964-4f0a-ba5a-846c0f7c0255-m0.avro', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:309, write_file) 2025-04-04 18:15:07 [ 670 ] INFO : GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:07 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:07 [ 670 ] DEBUG : http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:07 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/version-hint.text user:root, principal:None (hdfs_api.py:256, write_file) 2025-04-04 18:15:07 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/version-hint.text?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:07 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT 
/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/version-hint.text?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:07 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/version-hint.text?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/version-hint.text?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:284, write_file) 2025-04-04 18:15:07 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/version-hint.text?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': b'3', 'headers': {'content-type': 'text/plain', 'host': '172.16.2.2'}, 'params': {'file': '/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/version-hint.text', 'user.name': 'root'}, 'allow_redirects': False, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50075 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:07 [ 670 ] DEBUG : http://172.16.2.2:50075 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/version-hint.text?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true&file=%2Ficeberg_data%2Fdefault%2Ftest_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79%2Fmetadata%2Fversion-hint.text&user.name=root HTTP/1.1" 201 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:07 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/version-hint.text', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : b'' {'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'webhdfs://hdfs1:9000/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/version-hint.text', 
'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:309, write_file) 2025-04-04 18:15:07 [ 670 ] INFO : GETFILESTATUS /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata user.name=root 172.16.2.2:50070 (__init__.py:412, _request) 2025-04-04 18:15:07 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:07 [ 670 ] DEBUG : http://172.16.2.2:50070 "GET /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata?user.name=root&op=GETFILESTATUS HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-04-04 18:15:07 [ 670 ] DEBUG : write_data protocol:http host:hdfs1 port:50070 path: /iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-7913632545671200363-1-17db9cfb-7964-4f0a-ba5a-846c0f7c0255.avro user:root, principal:None (hdfs_api.py:256, write_file) 2025-04-04 18:15:07 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50070/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-7913632545671200363-1-17db9cfb-7964-4f0a-ba5a-846c0f7c0255.avro?op=CREATE', 'allow_redirects': False, 'headers': {'host': '172.16.2.2'}, 'params': {'overwrite': 'true'}, 'verify': False, 'auth': None} (hdfs_api.py:131, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : Starting new HTTP connection (1): 172.16.2.2:50070 (connectionpool.py:245, _new_conn) 2025-04-04 18:15:07 [ 670 ] DEBUG : http://172.16.2.2:50070 "PUT /webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-7913632545671200363-1-17db9cfb-7964-4f0a-ba5a-846c0f7c0255.avro?op=CREATE&overwrite=true HTTP/1.1" 307 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:07 [ 670 ] DEBUG : response_data:b'' headers:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-7913632545671200363-1-17db9cfb-7964-4f0a-ba5a-846c0f7c0255.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:133, req_wrapper) 2025-04-04 18:15:07 [ 670 ] DEBUG : HDFS api response:{'Cache-Control': 'no-cache', 'Expires': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Date': 'Fri, 04 Apr 2025 18:15:07 GMT, Fri, 04 Apr 2025 18:15:07 GMT', 'Pragma': 'no-cache, no-cache', 'Content-Type': 'application/octet-stream', 'Location': 'http://hdfs1:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-7913632545671200363-1-17db9cfb-7964-4f0a-ba5a-846c0f7c0255.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'Content-Length': '0', 'Server': 'Jetty(6.1.26)'} (hdfs_api.py:284, write_file) 2025-04-04 18:15:07 [ 670 ] DEBUG : CALL: {'url': 'http://172.16.2.2:50075/webhdfs/v1/iceberg_data/default/test_iceberg_cluster_2_hdfs_65086edc_78b0_402d_a996_5d4c756e9f79/metadata/snap-7913632545671200363-1-17db9cfb-7964-4f0a-ba5a-846c0f7c0255.avro?op=CREATE&namenoderpcaddress=hdfs1:9000&overwrite=true', 'data': 
b'Obj\x01\x0e\x16avro.schema\x9e\x1e{"type":"record","name":"manifest_file","fields":[{"name":"manifest_path","type":"string","doc":"Location URI with FS scheme","field-id":500},{"name":"manifest_length","type":"long","doc":"Total file size in bytes","field-id":501},{"name":"partition_spec_id","type":"int","doc":"Spec ID used to write","field-id":502},{"name":"content","type":"int","doc":"Contents of the manifest: 0=data, 1=deletes","field-id":517},{"name":"sequence_number","type":"long","doc":"Sequence number when the manifest was added","field-id":515},{"name":"min_sequence_number","type":"long","doc":"Lowest sequence number in the manifest","field-id":516},{"name":"added_snapshot_id","type":"long","doc":"Snapshot ID that added the manifest","field-id":503},{"name":"added_data_files_count","type":"int","doc":"Added entry count","field-id":504},{"name":"existing_data_files_count","type":"int","doc":"Existing entry count","field-id":505},{"name":"deleted_data_files_count","type":"int","doc":"Deleted entry count","field-id":506},{"name":"added_rows_count","type":"long","doc":"Added rows count","field-id":512},{"name":"existing_rows_count","type":"long","doc":"Existing rows count","field-id":513},{"name":"deleted_rows_count","type":"long","doc":"Deleted rows count","field-id":514},{"name":"partitions","type":["null",{"type":"array","items":{"type":"record","name":"r508","fields":[{"name":"contains_null","type":"boolean","doc":"True if any file has a null partition value","field-id":509},{"name":"contains_nan","type":["null","boolean"],"doc":"True if any file has a nan partition value","default":null,"field-id":518},{"name":"lower_bound","type":["null","bytes"],"doc":"Partition lower bound for all files","default":null,"field-id":510},{"name":"upper_bound","type":["null","bytes"],"doc":"Partition upper bound for all files","default":null,"field-id":511}]},"element-id":508}],"doc":"Summary for each partition","default":null,"field-id":507}]}\x14avro.codec\x0edeflate\x16snapshot-id&7913632545671200363\x1cformat-version\x022\x1esequence-number\x023\x1ciceberg.schema\xd6\x1f{"type":"struct","schema-id":0,"fields":[{"id":500,"name":"manifest_path","required":true,"type":"string","doc":"Location URI with FS scheme"},{"id":501,"name":"manifest_length","required":true,"type":"long","doc":"Total file size in bytes"},{"id":502,"name":"partition_spec_id","required":true,"type":"int","doc":"Spec ID used to write"},{"id":517,"name":"content","required":true,"type":"int","doc":"Contents of the manifest: 0=data, 1=deletes"},{"id":515,"name":"sequence_number","required":true,"type":"long","doc":"Sequence number when the manifest was added"},{"id":516,"name":"min_sequence_number","required":true,"type":"long","doc":"Lowest sequence number in the manifest"},{"id":503,"name":"added_snapshot_id","required":true,"type":"long","doc":"Snapshot ID that added the manifest"},{"id":504,"name":"added_data_files_count","required":true,"type":"int","doc":"Added entry count"},{"id":505,"name":"existing_data_files_count","required":true,"type":"int","doc":"Existing entry count"},{"id":506,"name":"deleted_data_files_count","required":true,"type":"int","doc":"Deleted entry count"},{"id":512,"name":"added_rows_count","required":true,"type":"long","doc":"Added rows count"},{"id":513,"name":"existing_rows_count","required":true,"type":"long","doc":"Existing rows count"},{"id":514,"name":"deleted_rows_count","required":true,"type":"long","doc":"Deleted rows 
count"},{"id":507,"name":"partitions","required":false,"type":{"type":"list","element-id":508,"element":{"type":"struct","fields":[{"id":509,"name":"contains_null","required":true,"type":"boolean","doc":"True if any file has a null partition value"},{"id":518,"name":"contains_nan","required":false,"type":"boolean","doc":"True if any file has a nan partition value"},{"id":510,"name":"lower_bound","required":false,"type":"binary","doc":"Partition lower bound for all files"},{"id":511,"name":"upper_bound","required":false,"type":"binary","doc":"Partition upper bound for all files"}]},"element-required":true},"doc":"Summary for each partition"}]}$parent-snapshot-id&4370063500831834784\x000\xaa#\x86p-z\xb0\xd73p\xf5\xb5\x9f\xe3&\x06\xf2\x03\xbd\xd0\xb1JC1\x14\xc6\xf1\xf4R\xfa6\xf1\x9e\xa6\'\xc9=O\x13Nr\x12+\xb4\x08\xb7\xb7\x82\xab\x9b\x82\xe0$]\x9c\xc4W\xb0\x93\xe0(.\x82\x83\x83\xee\xba\x88\xce\xa5\x8bE(\xfa\x02:\x7f\xf0\xf1\xe3\xbf\xa8\xea\xbd\x94cnw\x83p\xc7\xb5\xe4\xc2\xf3IWwy\xd6\x85\xed\x92&\xf3Y\x97\xdb`\xc2X\xca,8\x0b\x8d\xcb\x92\x82o"\x04\x04#\x81\x89\\\xb0\x82\xc9[\x97\xa9x\xaa\xa7\xb9\xe3\xef\xc7\xa1\x97H\xa9D\xed\xc9\xa1\xc6\x02\xac#[\xd6\r\xba\x04\xc5\'0\xd6\xea)\xec\xf0A\xbb\xff>Vj0xZ]\xac\xdf\x96\x9f\x8f/\xbdJ\xa9\xbb\x9eR\x95Z\xfc9\x93\x86\xd1\x93\xc5\xa4\xc5a\xd4H\x1bT,&k\xeb\xd2\x88\x0b\x15d0[\xe6\xc7\x86\xd9\xef\xdf\x9e format_version = '1', storage_type = 's3' @pytest.mark.parametrize("format_version", ["1", "2"]) @pytest.mark.parametrize("storage_type", ["s3", "azure", "hdfs"]) def test_cluster_table_function(started_cluster, format_version, storage_type): if is_arm() and storage_type == "hdfs": pytest.skip("Disabled test IcebergHDFS for aarch64") instance = started_cluster.instances["node1"] spark = started_cluster.spark_session TABLE_NAME = ( "test_iceberg_cluster_" + format_version + "_" + storage_type + "_" + get_uuid_str() ) def add_df(mode): write_iceberg_from_df( spark, generate_data(spark, 0, 100), TABLE_NAME, mode=mode, format_version=format_version, ) files = default_upload_directory( started_cluster, storage_type, f"/iceberg_data/default/{TABLE_NAME}/", f"/iceberg_data/default/{TABLE_NAME}/", ) logging.info(f"Adding another dataframe. result files: {files}") return files files = add_df(mode="overwrite") for i in range(1, len(started_cluster.instances)): files = add_df(mode="append") logging.info(f"Setup complete. 
files: {files}") assert len(files) == 5 + 4 * (len(started_cluster.instances) - 1) clusters = instance.query(f"SELECT * FROM system.clusters") logging.info(f"Clusters setup: {clusters}") # Regular Query only node1 table_function_expr = get_creation_expression( storage_type, TABLE_NAME, started_cluster, table_function=True ) select_regular = ( instance.query(f"SELECT * FROM {table_function_expr}").strip().split() ) # Cluster Query with node1 as coordinator table_function_expr_cluster = get_creation_expression( storage_type, TABLE_NAME, started_cluster, table_function=True, run_on_cluster=True, ) query_id_cluster = str(uuid.uuid4()) select_cluster = ( instance.query( f"SELECT * FROM {table_function_expr_cluster}", query_id=query_id_cluster ) .strip() .split() ) # Cluster Query with node1 as coordinator with alternative syntax query_id_cluster_alt_syntax = str(uuid.uuid4()) select_cluster_alt_syntax = ( instance.query( f""" SELECT * FROM {table_function_expr} SETTINGS object_storage_cluster='cluster_simple' """, query_id=query_id_cluster_alt_syntax, ) .strip() .split() ) create_iceberg_table(storage_type, instance, TABLE_NAME, started_cluster, object_storage_cluster='cluster_simple') query_id_cluster_table_engine = str(uuid.uuid4()) select_cluster_table_engine = ( instance.query( f""" SELECT * FROM {TABLE_NAME} """, query_id=query_id_cluster_table_engine, ) .strip() .split() ) select_remote_cluster = ( instance.query(f"SELECT * FROM remote('node2',{table_function_expr_cluster})") .strip() .split() ) instance.query(f"DROP TABLE IF EXISTS `{TABLE_NAME}` SYNC") create_iceberg_table(storage_type, instance, TABLE_NAME, started_cluster) query_id_pure_table_engine = str(uuid.uuid4()) select_pure_table_engine = ( instance.query( f""" SELECT * FROM {TABLE_NAME} """, query_id=query_id_pure_table_engine, ) .strip() .split() ) query_id_pure_table_engine_cluster = str(uuid.uuid4()) select_pure_table_engine_cluster = ( instance.query( f""" SELECT * FROM {TABLE_NAME} SETTINGS object_storage_cluster='cluster_simple' """, query_id=query_id_pure_table_engine_cluster, ) .strip() .split() ) # Simple size check assert len(select_regular) == 600 assert len(select_cluster) == 600 assert len(select_cluster_alt_syntax) == 600 > assert len(select_cluster_table_engine) == 600 E AssertionError: assert 1800 == 600 E + where 1800 = len(['0', '1', '1', '2', '2', '3', ...]) test_storage_iceberg/test.py:747: AssertionError ----------------------------- Captured stdout call ----------------------------- 25/04/04 18:15:09 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 25/04/04 18:15:09 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 25/04/04 18:15:09 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 25/04/04 18:15:09 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 25/04/04 18:15:09 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 25/04/04 18:15:09 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 
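For orientation, the four read paths that the size checks compare appear verbatim as "Executing query ... on node1" records near the end of this capture. A condensed summary follows; TABLE and S3_URL are stand-ins copied from this run's generated table name and MinIO endpoint, and the named collection s3 comes from the test configuration:

    # Condensed from the "Executing query ... on node1" records in this log.
    TABLE = "test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312"
    S3_URL = "http://minio1:9001/root/"

    queries = {
        # 1. plain table function, coordinator only -> 600 values, passed
        "regular": f"SELECT * FROM icebergS3(s3, filename = 'iceberg_data/default/{TABLE}/', format=Parquet, url = '{S3_URL}')",
        # 2. explicit *Cluster table function -> 600 values, passed
        "cluster_function": f"SELECT * FROM icebergS3Cluster('cluster_simple', s3, filename = 'iceberg_data/default/{TABLE}/', format=Parquet, url = '{S3_URL}')",
        # 3. plain function + SETTINGS object_storage_cluster -> 600 values, passed
        "cluster_setting": f"SELECT * FROM icebergS3(s3, filename = 'iceberg_data/default/{TABLE}/', format=Parquet, url = '{S3_URL}') SETTINGS object_storage_cluster='cluster_simple'",
        # 4. table engine created with the cluster setting -> 1800 values, failed
        "cluster_table_engine": f"SELECT * FROM {TABLE}",
    }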
{} {} {} Upload to bucket: None Upload to bucket: None Upload to bucket: None {} Upload to bucket: None Upload to bucket: None Upload to bucket: None Upload to bucket: None Upload to bucket: None Upload to bucket: None Upload to bucket: None Upload to bucket: None Upload to bucket: None Upload to bucket: None ----------------------------- Captured stderr call ----------------------------- Command to send: c o50 sc e Answer received: !yro526 Command to send: c o526 defaultParallelism e Answer received: !yi1 Command to send: c o61 range i0 i100 i1 i1 e Answer received: !yro527 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo528 Command to send: c o528 add sa e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro528 e Answer received: !yro529 Command to send: c o527 toDF ro529 e Answer received: !yro530 Command to send: c o50 sc e Answer received: !yro531 Command to send: c o531 defaultParallelism e Answer received: !yi1 Command to send: c o61 range i1 i101 i1 i1 e Answer received: !yro532 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo533 Command to send: c o533 add sb e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro533 e Answer received: !yro534 Command to send: c o532 toDF ro534 e Answer received: !yro535 Command to send: c o535 apply sb e Answer received: !yro536 Command to send: r u SparkSession rj e Answer received: !ycorg.apache.spark.sql.SparkSession Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e Answer received: !ym Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e Answer received: !yro537 Command to send: c o537 isDefined e Answer received: !ybtrue Command to send: r u SparkSession rj e Answer received: !ycorg.apache.spark.sql.SparkSession Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e Answer received: !ym Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e Answer received: !yro538 Command to send: c o538 get e Answer received: !yro539 Command to send: r u SparkSession$ rj e Answer received: !ycorg.apache.spark.sql.SparkSession$ Command to send: r m org.apache.spark.sql.SparkSession$ MODULE$ e Answer received: !yro540 Command to send: i java.util.HashMap e Answer received: !yao541 Command to send: c o540 applyModifiableSettings ro539 ro541 e Answer received: !yv Command to send: c o61 parseDataType s"string" e Answer received: !yro542 Command to send: c o536 cast ro542 e Answer received: !yro543 Command to send: c o535 withColumn sb ro543 e Answer received: !yro544 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions row_number e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions row_number e Answer received: !yro545 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e Answer 
received: !yro546 Command to send: r u org rj e Answer received: !yp Command to send: r u org.apache rj e Answer received: !yp Command to send: r u org.apache.spark rj e Answer received: !yp Command to send: r u org.apache.spark.sql rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions.Window rj e Answer received: !ycorg.apache.spark.sql.expressions.Window Command to send: r m org.apache.spark.sql.expressions.Window orderBy e Answer received: !ym Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo547 Command to send: c o547 add ro546 e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro547 e Answer received: !yro548 Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro548 e Answer received: !yro549 Command to send: c o545 over ro549 e Answer received: !yro550 Command to send: c o530 withColumn srow_index ro550 e Answer received: !yro551 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions row_number e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions row_number e Answer received: !yro552 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !yro553 Command to send: r u org rj e Answer received: !yp Command to send: r u org.apache rj e Answer received: !yp Command to send: r u org.apache.spark rj e Answer received: !yp Command to send: r u org.apache.spark.sql rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions.Window rj e Answer received: !ycorg.apache.spark.sql.expressions.Window Command to send: r m org.apache.spark.sql.expressions.Window orderBy e Answer received: !ym Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo554 Command to send: c o554 add ro553 e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro554 e Answer received: !yro555 Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro555 e Answer received: !yro556 Command to send: c o552 over ro556 e Answer received: !yro557 Command to send: c o544 withColumn srow_index ro557 e Answer received: !yro558 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo559 Command to send: c o559 add srow_index e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro559 e Answer received: !yro560 Command to send: c o551 join ro558 ro560 sinner e Answer received: !yro561 Command to send: c o561 drop srow_index e Answer received: !yro562 Command to send: c 
o562 writeTo stest_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312 e Answer received: !yro563 Command to send: c o563 tableProperty sformat-version s1 e Answer received: !yro564 Command to send: c o563 using siceberg e Answer received: !yro565 Command to send: c o563 create e Answer received: !yv http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/data/00000-25-5b65139d-4664-4dd7-bcd6-81d16a5b6892-00001.parquet HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/9da7cb9f-af1d-4a02-ab90-1ac6f0aa224c-m0.avro HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/snap-7172275750806074595-1-9da7cb9f-af1d-4a02-ab90-1ac6f0aa224c.avro HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/version-hint.text HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/v1.metadata.json HTTP/1.1" 200 0 Adding another dataframe. result files: ['/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/data/00000-25-5b65139d-4664-4dd7-bcd6-81d16a5b6892-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/9da7cb9f-af1d-4a02-ab90-1ac6f0aa224c-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/snap-7172275750806074595-1-9da7cb9f-af1d-4a02-ab90-1ac6f0aa224c.avro', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/version-hint.text', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/v1.metadata.json'] Command to send: c o50 sc e Answer received: !yro566 Command to send: c o566 defaultParallelism e Answer received: !yi1 Command to send: c o61 range i0 i100 i1 i1 e Answer received: !yro567 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo568 Command to send: c o568 add sa e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro568 e Answer received: !yro569 Command to send: c o567 toDF ro569 e Answer received: !yro570 Command to send: c o50 sc e Answer received: !yro571 Command to send: c o571 defaultParallelism e Answer received: !yi1 Command to send: c o61 range i1 i101 i1 i1 e Answer received: !yro572 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo573 Command to send: c o573 add sb e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro573 e Answer received: !yro574 Command to send: c o572 toDF ro574 e Answer received: !yro575 Command to send: c o575 apply sb e Answer received: !yro576 Command to send: r u SparkSession rj e Answer received: !ycorg.apache.spark.sql.SparkSession Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e Answer received: !ym Command to 
send: c z:org.apache.spark.sql.SparkSession getActiveSession e Answer received: !yro577 Command to send: c o577 isDefined e Answer received: !ybtrue Command to send: r u SparkSession rj e Answer received: !ycorg.apache.spark.sql.SparkSession Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e Answer received: !ym Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e Answer received: !yro578 Command to send: c o578 get e Answer received: !yro579 Command to send: r u SparkSession$ rj e Answer received: !ycorg.apache.spark.sql.SparkSession$ Command to send: r m org.apache.spark.sql.SparkSession$ MODULE$ e Answer received: !yro580 Command to send: i java.util.HashMap e Answer received: !yao581 Command to send: c o580 applyModifiableSettings ro579 ro581 e Answer received: !yv Command to send: c o61 parseDataType s"string" e Answer received: !yro582 Command to send: c o576 cast ro582 e Answer received: !yro583 Command to send: c o575 withColumn sb ro583 e Answer received: !yro584 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions row_number e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions row_number e Answer received: !yro585 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !yro586 Command to send: r u org rj e Answer received: !yp Command to send: r u org.apache rj e Answer received: !yp Command to send: r u org.apache.spark rj e Answer received: !yp Command to send: r u org.apache.spark.sql rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions.Window rj e Answer received: !ycorg.apache.spark.sql.expressions.Window Command to send: r m org.apache.spark.sql.expressions.Window orderBy e Answer received: !ym Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo587 Command to send: c o587 add ro586 e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro587 e Answer received: !yro588 Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro588 e Answer received: !yro589 Command to send: c o585 over ro589 e Answer received: !yro590 Command to send: c o570 withColumn srow_index ro590 e Answer received: !yro591 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions row_number e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions row_number e Answer received: !yro592 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !yro593 Command to send: r u org rj e Answer received: !yp Command to send: r u org.apache rj e Answer received: !yp Command to send: r u org.apache.spark rj e Answer received: !yp Command to send: r u 
org.apache.spark.sql rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions.Window rj e Answer received: !ycorg.apache.spark.sql.expressions.Window Command to send: r m org.apache.spark.sql.expressions.Window orderBy e Answer received: !ym Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo594 Command to send: c o594 add ro593 e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro594 e Answer received: !yro595 Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro595 e Answer received: !yro596 Command to send: c o592 over ro596 e Answer received: !yro597 Command to send: c o584 withColumn srow_index ro597 e Answer received: !yro598 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo599 Command to send: c o599 add srow_index e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro599 e Answer received: !yro600 Command to send: c o591 join ro598 ro600 sinner e Answer received: !yro601 Command to send: c o601 drop srow_index e Answer received: !yro602 Command to send: c o602 writeTo stest_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312 e Answer received: !yro603 Command to send: c o603 append e Answer received: !yv http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/data/00000-27-8d8b7d35-ae80-4d85-9f37-6cbd20ca4bc8-00001.parquet HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/data/00000-25-5b65139d-4664-4dd7-bcd6-81d16a5b6892-00001.parquet HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/v2.metadata.json HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/9da7cb9f-af1d-4a02-ab90-1ac6f0aa224c-m0.avro HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/snap-3044169209458028690-1-6ad7753b-326f-456a-92ed-2a7f3aa8f9ec.avro HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/snap-7172275750806074595-1-9da7cb9f-af1d-4a02-ab90-1ac6f0aa224c.avro HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/6ad7753b-326f-456a-92ed-2a7f3aa8f9ec-m0.avro HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/version-hint.text HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/v1.metadata.json HTTP/1.1" 200 0 Adding another dataframe. 
result files: ['/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/data/00000-27-8d8b7d35-ae80-4d85-9f37-6cbd20ca4bc8-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/data/00000-25-5b65139d-4664-4dd7-bcd6-81d16a5b6892-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/v2.metadata.json', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/9da7cb9f-af1d-4a02-ab90-1ac6f0aa224c-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/snap-3044169209458028690-1-6ad7753b-326f-456a-92ed-2a7f3aa8f9ec.avro', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/snap-7172275750806074595-1-9da7cb9f-af1d-4a02-ab90-1ac6f0aa224c.avro', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/6ad7753b-326f-456a-92ed-2a7f3aa8f9ec-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/version-hint.text', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/v1.metadata.json'] Command to send: c o50 sc e Answer received: !yro604 Command to send: c o604 defaultParallelism e Answer received: !yi1 Command to send: c o61 range i0 i100 i1 i1 e Answer received: !yro605 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo606 Command to send: c o606 add sa e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro606 e Answer received: !yro607 Command to send: c o605 toDF ro607 e Answer received: !yro608 Command to send: c o50 sc e Answer received: !yro609 Command to send: c o609 defaultParallelism e Answer received: !yi1 Command to send: c o61 range i1 i101 i1 i1 e Answer received: !yro610 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo611 Command to send: c o611 add sb e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro611 e Answer received: !yro612 Command to send: c o610 toDF ro612 e Answer received: !yro613 Command to send: c o613 apply sb e Answer received: !yro614 Command to send: r u SparkSession rj e Answer received: !ycorg.apache.spark.sql.SparkSession Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e Answer received: !ym Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e Answer received: !yro615 Command to send: c o615 isDefined e Answer received: !ybtrue Command to send: r u SparkSession rj e Answer received: !ycorg.apache.spark.sql.SparkSession Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e Answer received: !ym Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e Answer received: !yro616 Command to send: c o616 get e Answer received: !yro617 Command to send: r u SparkSession$ rj e Answer received: !ycorg.apache.spark.sql.SparkSession$ Command to send: r m org.apache.spark.sql.SparkSession$ MODULE$ e Answer received: !yro618 Command to 
send: i java.util.HashMap e Answer received: !yao619 Command to send: c o618 applyModifiableSettings ro617 ro619 e Answer received: !yv Command to send: c o61 parseDataType s"string" e Answer received: !yro620 Command to send: c o614 cast ro620 e Answer received: !yro621 Command to send: c o613 withColumn sb ro621 e Answer received: !yro622 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions row_number e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions row_number e Answer received: !yro623 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !yro624 Command to send: r u org rj e Answer received: !yp Command to send: r u org.apache rj e Answer received: !yp Command to send: r u org.apache.spark rj e Answer received: !yp Command to send: r u org.apache.spark.sql rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions.Window rj e Answer received: !ycorg.apache.spark.sql.expressions.Window Command to send: r m org.apache.spark.sql.expressions.Window orderBy e Answer received: !ym Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo625 Command to send: c o625 add ro624 e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro625 e Answer received: !yro626 Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro626 e Answer received: !yro627 Command to send: c o623 over ro627 e Answer received: !yro628 Command to send: c o608 withColumn srow_index ro628 e Answer received: !yro629 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions row_number e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions row_number e Answer received: !yro630 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !yro631 Command to send: r u org rj e Answer received: !yp Command to send: r u org.apache rj e Answer received: !yp Command to send: r u org.apache.spark rj e Answer received: !yp Command to send: r u org.apache.spark.sql rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions.Window rj e Answer received: !ycorg.apache.spark.sql.expressions.Window Command to send: r m org.apache.spark.sql.expressions.Window orderBy e Answer received: !ym Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo632 Command to send: c o632 add ro631 e Answer received: !ybtrue Command to send: c 
z:org.apache.spark.api.python.PythonUtils toSeq ro632 e Answer received: !yro633 Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro633 e Answer received: !yro634 Command to send: c o630 over ro634 e Answer received: !yro635 Command to send: c o622 withColumn srow_index ro635 e Answer received: !yro636 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo637 Command to send: c o637 add srow_index e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro637 e Answer received: !yro638 Command to send: c o629 join ro636 ro638 sinner e Answer received: !yro639 Command to send: c o639 drop srow_index e Answer received: !yro640 Command to send: c o640 writeTo stest_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312 e Answer received: !yro641 Command to send: c o641 append e Answer received: !yv http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/data/00000-29-4bafdff8-a4e7-4e8b-a900-65d629d30194-00001.parquet HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/data/00000-27-8d8b7d35-ae80-4d85-9f37-6cbd20ca4bc8-00001.parquet HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/data/00000-25-5b65139d-4664-4dd7-bcd6-81d16a5b6892-00001.parquet HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/v2.metadata.json HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/9da7cb9f-af1d-4a02-ab90-1ac6f0aa224c-m0.avro HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/snap-3044169209458028690-1-6ad7753b-326f-456a-92ed-2a7f3aa8f9ec.avro HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/snap-7172275750806074595-1-9da7cb9f-af1d-4a02-ab90-1ac6f0aa224c.avro HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/snap-7265914511372071741-1-9fb5070b-1811-4783-8530-ce060847cac4.avro HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/6ad7753b-326f-456a-92ed-2a7f3aa8f9ec-m0.avro HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/version-hint.text HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/v1.metadata.json HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/9fb5070b-1811-4783-8530-ce060847cac4-m0.avro HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/v3.metadata.json HTTP/1.1" 200 0 Adding another dataframe. 
result files: ['/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/data/00000-29-4bafdff8-a4e7-4e8b-a900-65d629d30194-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/data/00000-27-8d8b7d35-ae80-4d85-9f37-6cbd20ca4bc8-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/data/00000-25-5b65139d-4664-4dd7-bcd6-81d16a5b6892-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/v2.metadata.json', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/9da7cb9f-af1d-4a02-ab90-1ac6f0aa224c-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/snap-3044169209458028690-1-6ad7753b-326f-456a-92ed-2a7f3aa8f9ec.avro', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/snap-7172275750806074595-1-9da7cb9f-af1d-4a02-ab90-1ac6f0aa224c.avro', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/snap-7265914511372071741-1-9fb5070b-1811-4783-8530-ce060847cac4.avro', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/6ad7753b-326f-456a-92ed-2a7f3aa8f9ec-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/version-hint.text', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/v1.metadata.json', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/9fb5070b-1811-4783-8530-ce060847cac4-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/v3.metadata.json'] Setup complete. 
files: ['/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/data/00000-29-4bafdff8-a4e7-4e8b-a900-65d629d30194-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/data/00000-27-8d8b7d35-ae80-4d85-9f37-6cbd20ca4bc8-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/data/00000-25-5b65139d-4664-4dd7-bcd6-81d16a5b6892-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/v2.metadata.json', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/9da7cb9f-af1d-4a02-ab90-1ac6f0aa224c-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/snap-3044169209458028690-1-6ad7753b-326f-456a-92ed-2a7f3aa8f9ec.avro', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/snap-7172275750806074595-1-9da7cb9f-af1d-4a02-ab90-1ac6f0aa224c.avro', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/snap-7265914511372071741-1-9fb5070b-1811-4783-8530-ce060847cac4.avro', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/6ad7753b-326f-456a-92ed-2a7f3aa8f9ec-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/version-hint.text', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/v1.metadata.json', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/9fb5070b-1811-4783-8530-ce060847cac4-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/v3.metadata.json']
Executing query SELECT * FROM system.clusters on node1
Clusters setup:
cluster_simple 1 1 0 1 node1 172.16.2.10 9000 1 default 0 0 0 \N \N \N
cluster_simple 1 1 0 2 node2 172.16.2.8 9000 0 default 0 0 0 \N \N \N
cluster_simple 1 1 0 3 node3 172.16.2.9 9000 0 default 0 0 0 \N \N \N
Executing query SELECT * FROM icebergS3(s3, filename = 'iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/', format=Parquet, url = 'http://minio1:9001/root/') on node1
Command to send: m d o525 e Answer received: !yv Command to send: m d o447 e Answer received: !yv Command to send: m d o446 e Answer received: !yv Command to send: m d o423 e Answer received: !yv Command to send: m d o475 e Answer received: !yv Command to send: m d o482 e Answer received: !yv Command to send: m d o463 e Answer received: !yv Command to send: m d o524 e Answer received: !yv Command to send: m d o528 e Answer received: !yv Command to send: m d o533 e Answer received: !yv Command to send: m d o541 e Answer received: !yv Command to send: m d o547 e Answer received: !yv Command to send: m d o554 e Answer received: !yv Command to send: m d o559 e Answer received: !yv Command to send: m d o526 e Answer received: !yv Command to send: m d o527 e Answer received: !yv Command to send: m d o529 e Answer received: !yv Command to send: m d o530 e Answer received: !yv Command to send: m d o531 e Answer received: !yv Command to send: m d o532 e Answer received: !yv Command to send: m d o534 e Answer received: !yv Command to send: m d o535 e Answer received: !yv Command to send: m d o536 e Answer received: !yv Command to send: m d o537 e Answer received: !yv Command to send: m d o538 e Answer received: 
!yv Command to send: m d o540 e Answer received: !yv Command to send: m d o542 e Answer received: !yv Command to send: m d o543 e Answer received: !yv Command to send: m d o544 e Answer received: !yv Command to send: m d o545 e Answer received: !yv Command to send: m d o546 e Answer received: !yv Command to send: m d o548 e Answer received: !yv Command to send: m d o549 e Answer received: !yv Command to send: m d o550 e Answer received: !yv Command to send: m d o551 e Answer received: !yv Command to send: m d o552 e Answer received: !yv Command to send: m d o553 e Answer received: !yv Command to send: m d o555 e Answer received: !yv Command to send: m d o556 e Answer received: !yv Command to send: m d o557 e Answer received: !yv Command to send: m d o558 e Answer received: !yv Command to send: m d o560 e Answer received: !yv Command to send: m d o561 e Answer received: !yv Command to send: m d o564 e Answer received: !yv Command to send: m d o565 e Answer received: !yv Command to send: m d o568 e Answer received: !yv Command to send: m d o573 e Answer received: !yv Command to send: m d o581 e Answer received: !yv Command to send: m d o587 e Answer received: !yv Command to send: m d o594 e Answer received: !yv Command to send: m d o599 e Answer received: !yv Command to send: m d o566 e Answer received: !yv Command to send: m d o567 e Answer received: !yv Command to send: m d o569 e Answer received: !yv Command to send: m d o570 e Answer received: !yv Command to send: m d o571 e Answer received: !yv Command to send: m d o572 e Answer received: !yv Command to send: m d o574 e Answer received: !yv Command to send: m d o575 e Answer received: !yv Command to send: m d o576 e Answer received: !yv Command to send: m d o577 e Answer received: !yv Command to send: m d o578 e Answer received: !yv Command to send: m d o580 e Answer received: !yv Command to send: m d o582 e Answer received: !yv Command to send: m d o583 e Answer received: !yv Command to send: m d o584 e Answer received: !yv Command to send: m d o585 e Answer received: !yv Command to send: m d o586 e Answer received: !yv Command to send: m d o588 e Answer received: !yv Command to send: m d o589 e Answer received: !yv Command to send: m d o590 e Answer received: !yv Command to send: m d o591 e Answer received: !yv Command to send: m d o592 e Answer received: !yv Command to send: m d o593 e Answer received: !yv Command to send: m d o595 e Answer received: !yv Command to send: m d o596 e Answer received: !yv Command to send: m d o597 e Answer received: !yv Command to send: m d o598 e Answer received: !yv Command to send: m d o600 e Answer received: !yv Command to send: m d o601 e Answer received: !yv Command to send: m d o602 e Answer received: !yv Command to send: m d o603 e Answer received: !yv Command to send: m d o606 e Answer received: !yv Command to send: m d o611 e Answer received: !yv Command to send: m d o619 e Answer received: !yv Command to send: m d o625 e Answer received: !yv Command to send: m d o632 e Answer received: !yv Command to send: m d o637 e Answer received: !yv Command to send: m d o604 e Answer received: !yv Command to send: m d o605 e Answer received: !yv Command to send: m d o607 e Answer received: !yv Command to send: m d o608 e Answer received: !yv Command to send: m d o609 e Answer received: !yv Command to send: m d o610 e Answer received: !yv Command to send: m d o612 e Answer received: !yv Command to send: m d o613 e Answer received: !yv Command to send: m d o614 e Answer received: !yv Command to send: m d 
o615 e Answer received: !yv Command to send: m d o616 e Answer received: !yv Command to send: m d o618 e Answer received: !yv Command to send: m d o620 e Answer received: !yv Command to send: m d o621 e Answer received: !yv Command to send: m d o622 e Answer received: !yv Command to send: m d o623 e Answer received: !yv Command to send: m d o624 e Answer received: !yv Command to send: m d o626 e Answer received: !yv Command to send: m d o627 e Answer received: !yv Command to send: m d o628 e Answer received: !yv Command to send: m d o629 e Answer received: !yv Command to send: m d o630 e Answer received: !yv Command to send: m d o631 e Answer received: !yv Command to send: m d o633 e Answer received: !yv Command to send: m d o634 e Answer received: !yv Command to send: m d o635 e Answer received: !yv Command to send: m d o636 e Answer received: !yv Command to send: m d o638 e Answer received: !yv Command to send: m d o639 e Answer received: !yv Command to send: m d o640 e Answer received: !yv Command to send: m d o641 e Answer received: !yv
Executing query SELECT * FROM icebergS3Cluster('cluster_simple', s3, filename = 'iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/', format=Parquet, url = 'http://minio1:9001/root/') on node1
Executing query SELECT * FROM icebergS3(s3, filename = 'iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/', format=Parquet, url = 'http://minio1:9001/root/') SETTINGS object_storage_cluster='cluster_simple' on node1
Executing query DROP TABLE IF EXISTS test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312; CREATE TABLE test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312 ENGINE=IcebergS3(s3, filename = 'iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/', format=Parquet, url = 'http://minio1:9001/root/') SETTINGS object_storage_cluster = 'cluster_simple' on node1
Executing query SELECT * FROM test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312 on node1
Executing query SELECT * FROM remote('node2',icebergS3Cluster('cluster_simple', s3, filename = 'iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/', format=Parquet, url = 'http://minio1:9001/root/')) on node1
Executing query DROP TABLE IF EXISTS `test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312` SYNC on node1
Executing query DROP TABLE IF EXISTS test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312; CREATE TABLE test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312 ENGINE=IcebergS3(s3, filename = 'iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/', format=Parquet, url = 'http://minio1:9001/root/') on node1
Executing query SELECT * FROM test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312 on node1
Executing query SELECT * FROM test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312 SETTINGS object_storage_cluster='cluster_simple' on node1
------------------------------ Captured log call -------------------------------
2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o50 sc e (clientserver.py:501, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro526 (clientserver.py:512, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o526 defaultParallelism e (clientserver.py:501, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yi1 (clientserver.py:512, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send:
c o61 range i0 i100 i1 i1 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro527 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ylo528 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o528 add sa e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro528 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro529 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o527 toDF ro529 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro530 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o50 sc e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro531 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o531 defaultParallelism e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yi1 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o61 range i1 i101 i1 i1 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro532 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ylo533 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o533 add sb e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro533 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro534 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o532 toDF ro534 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro535 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o535 apply sb e (clientserver.py:501, 
send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro536 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u SparkSession rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro537 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o537 isDefined e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u SparkSession rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro538 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o538 get e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro539 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u SparkSession$ rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession$ (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession$ MODULE$ e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro540 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: i java.util.HashMap e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yao541 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o540 applyModifiableSettings ro539 ro541 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o61 parseDataType s"string" e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro542 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o536 cast ro542 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro543 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o535 withColumn sb ro543 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro544 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: 
r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro545 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro546 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u org rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u org.apache rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u org.apache.spark rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions.Window rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.expressions.Window (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.expressions.Window orderBy e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer 
received: !ylo547 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o547 add ro546 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro547 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro548 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro548 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro549 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o545 over ro549 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro550 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o530 withColumn srow_index ro550 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro551 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro552 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro553 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u org rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u org.apache rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u org.apache.spark rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions rj e (clientserver.py:501, send_command) 
2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions.Window rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.expressions.Window (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.expressions.Window orderBy e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ylo554 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o554 add ro553 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro554 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro555 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro555 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro556 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o552 over ro556 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro557 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o544 withColumn srow_index ro557 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro558 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ylo559 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o559 add srow_index e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro559 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro560 (clientserver.py:512, send_command) 
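The request/response chatter captured here is py4j's client-server wire protocol, over which PySpark drives the JVM: 'c' calls a method on an object id (o526), 'r u'/'r m' are reflection lookups of classes and members, 'i' constructs an instance, 'm d oNNN' frees a remote reference (the garbage-collection runs earlier in this log), and 'e' terminates a command. Replies begin with '!' plus a status character ('y' for success) and a type code. A rough decoder for the reply strings seen in this trace, based on py4j's documented protocol constants; a sketch for reading the log, not code from the test harness:

    # Type codes as py4j defines them; only the ones appearing in this log.
    TYPE_CODES = {
        "r": "object reference",
        "i": "integer",
        "b": "boolean",
        "s": "string",
        "c": "class",
        "m": "method",
        "l": "list",
        "a": "map",
        "p": "package",
        "v": "void",
    }

    def decode_reply(reply):
        # Return messages look like "!" + status + type + payload.
        assert reply.startswith("!"), "py4j return messages start with '!'"
        status = "success" if reply[1] == "y" else "error"
        if len(reply) == 2:
            return status, None, None
        kind = TYPE_CODES.get(reply[2], "unknown")
        return status, kind, reply[3:] or None

    print(decode_reply("!yro526"))  # ('success', 'object reference', 'o526')
    print(decode_reply("!yi1"))     # ('success', 'integer', '1')
    print(decode_reply("!yv"))      # ('success', 'void', None)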
2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o551 join ro558 ro560 sinner e (clientserver.py:501, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro561 (clientserver.py:512, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o561 drop srow_index e (clientserver.py:501, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro562 (clientserver.py:512, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o562 writeTo stest_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312 e (clientserver.py:501, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro563 (clientserver.py:512, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o563 tableProperty sformat-version s1 e (clientserver.py:501, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro564 (clientserver.py:512, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o563 using siceberg e (clientserver.py:501, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro565 (clientserver.py:512, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o563 create e (clientserver.py:501, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/data/00000-25-5b65139d-4664-4dd7-bcd6-81d16a5b6892-00001.parquet HTTP/1.1" 200 0 (connectionpool.py:547, _make_request)
2025-04-04 18:15:09 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/9da7cb9f-af1d-4a02-ab90-1ac6f0aa224c-m0.avro HTTP/1.1" 200 0 (connectionpool.py:547, _make_request)
2025-04-04 18:15:09 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/snap-7172275750806074595-1-9da7cb9f-af1d-4a02-ab90-1ac6f0aa224c.avro HTTP/1.1" 200 0 (connectionpool.py:547, _make_request)
2025-04-04 18:15:09 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/version-hint.text HTTP/1.1" 200 0 (connectionpool.py:547, _make_request)
2025-04-04 18:15:09 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/v1.metadata.json HTTP/1.1" 200 0 (connectionpool.py:547, _make_request)
2025-04-04 18:15:09 [ 670 ] INFO : Adding another dataframe. result files: ['/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/data/00000-25-5b65139d-4664-4dd7-bcd6-81d16a5b6892-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/9da7cb9f-af1d-4a02-ab90-1ac6f0aa224c-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/snap-7172275750806074595-1-9da7cb9f-af1d-4a02-ab90-1ac6f0aa224c.avro', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/version-hint.text', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/v1.metadata.json'] (test.py:645, add_df)
2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o50 sc e (clientserver.py:501, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro566 (clientserver.py:512, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o566 defaultParallelism e (clientserver.py:501, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yi1 (clientserver.py:512, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o61 range i0 i100 i1 i1 e (clientserver.py:501, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro567 (clientserver.py:512, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ylo568 (clientserver.py:512, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o568 add sa e (clientserver.py:501, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro568 e (clientserver.py:501, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro569 (clientserver.py:512, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o567 toDF ro569 e (clientserver.py:501, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro570 (clientserver.py:512, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o50 sc e (clientserver.py:501, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro571 (clientserver.py:512, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o571 defaultParallelism e (clientserver.py:501, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yi1 (clientserver.py:512, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o61 range i1 i101 i1 i1 e (clientserver.py:501, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro572 (clientserver.py:512, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command)
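Read end to end, these round-trips (and the identical rounds that follow) are PySpark building two single-partition range DataFrames, zipping them row-by-row via row_number() over monotonically_increasing_id(), and writing the result as a format-version-1 Iceberg table: create() on the first round, append() on the later ones. A plausible PySpark equivalent of the call sequence, reconstructed from the trace for illustration; the session variable `spark` and the exact structure of the test's add_df helper are assumptions, while the table name and write chain are taken from the log:

    from pyspark.sql.functions import col, monotonically_increasing_id, row_number
    from pyspark.sql.window import Window

    TABLE = "test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312"

    def make_df(spark):
        # range(0, 100) in one partition -> column "a";
        # range(1, 101) in one partition, cast to string -> column "b".
        a = spark.range(0, 100, 1, 1).toDF("a")
        b = spark.range(1, 101, 1, 1).toDF("b").withColumn("b", col("b").cast("string"))
        # Zip the two frames by a synthetic row_index, then drop it.
        idx = row_number().over(Window.orderBy(monotonically_increasing_id()))
        a = a.withColumn("row_index", idx)
        b = b.withColumn("row_index", idx)
        return a.join(b, ["row_index"], "inner").drop("row_index")

    def write_df(spark, first):
        writer = make_df(spark).writeTo(TABLE)
        if first:
            # First round in the trace: tableProperty + using("iceberg") + create().
            writer.tableProperty("format-version", "1").using("iceberg").create()
        else:
            # Later rounds append into the existing table.
            writer.append()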
2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ylo573 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o573 add sb e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro573 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro574 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o572 toDF ro574 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro575 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o575 apply sb e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro576 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u SparkSession rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro577 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o577 isDefined e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u SparkSession rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro578 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o578 get e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro579 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u SparkSession$ rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession$ (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession$ MODULE$ e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : 
Answer received: !yro580 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: i java.util.HashMap e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yao581 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o580 applyModifiableSettings ro579 ro581 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o61 parseDataType s"string" e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro582 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o576 cast ro582 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro583 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o575 withColumn sb ro583 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro584 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro585 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro586 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u org rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u org.apache rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u org.apache.spark rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer 
received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions.Window rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.expressions.Window (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.expressions.Window orderBy e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ylo587 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o587 add ro586 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro587 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro588 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro588 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro589 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o585 over ro589 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro590 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o570 withColumn srow_index ro590 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro591 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro592 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 
18:15:09 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro593 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u org rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u org.apache rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u org.apache.spark rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions.Window rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.expressions.Window (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.expressions.Window orderBy e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ylo594 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o594 add ro593 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro594 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro595 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro595 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro596 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o592 over ro596 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro597 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o584 withColumn srow_index ro597 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: 
!yro598 (clientserver.py:512, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ylo599 (clientserver.py:512, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o599 add srow_index e (clientserver.py:501, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro599 e (clientserver.py:501, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro600 (clientserver.py:512, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o591 join ro598 ro600 sinner e (clientserver.py:501, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro601 (clientserver.py:512, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o601 drop srow_index e (clientserver.py:501, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro602 (clientserver.py:512, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o602 writeTo stest_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312 e (clientserver.py:501, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro603 (clientserver.py:512, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o603 append e (clientserver.py:501, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/data/00000-27-8d8b7d35-ae80-4d85-9f37-6cbd20ca4bc8-00001.parquet HTTP/1.1" 200 0 (connectionpool.py:547, _make_request)
2025-04-04 18:15:09 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/data/00000-25-5b65139d-4664-4dd7-bcd6-81d16a5b6892-00001.parquet HTTP/1.1" 200 0 (connectionpool.py:547, _make_request)
2025-04-04 18:15:09 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/v2.metadata.json HTTP/1.1" 200 0 (connectionpool.py:547, _make_request)
2025-04-04 18:15:09 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/9da7cb9f-af1d-4a02-ab90-1ac6f0aa224c-m0.avro HTTP/1.1" 200 0 (connectionpool.py:547, _make_request)
2025-04-04 18:15:09 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/snap-3044169209458028690-1-6ad7753b-326f-456a-92ed-2a7f3aa8f9ec.avro HTTP/1.1" 200 0 (connectionpool.py:547, _make_request)
2025-04-04 18:15:09 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/snap-7172275750806074595-1-9da7cb9f-af1d-4a02-ab90-1ac6f0aa224c.avro HTTP/1.1" 200 0 (connectionpool.py:547, _make_request)
2025-04-04 18:15:09 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/6ad7753b-326f-456a-92ed-2a7f3aa8f9ec-m0.avro HTTP/1.1" 200 0 (connectionpool.py:547, _make_request)
2025-04-04 18:15:09 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/version-hint.text HTTP/1.1" 200 0 (connectionpool.py:547, _make_request)
2025-04-04 18:15:09 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/v1.metadata.json HTTP/1.1" 200 0 (connectionpool.py:547, _make_request)
2025-04-04 18:15:09 [ 670 ] INFO : Adding another dataframe. result files: ['/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/data/00000-27-8d8b7d35-ae80-4d85-9f37-6cbd20ca4bc8-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/data/00000-25-5b65139d-4664-4dd7-bcd6-81d16a5b6892-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/v2.metadata.json', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/9da7cb9f-af1d-4a02-ab90-1ac6f0aa224c-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/snap-3044169209458028690-1-6ad7753b-326f-456a-92ed-2a7f3aa8f9ec.avro', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/snap-7172275750806074595-1-9da7cb9f-af1d-4a02-ab90-1ac6f0aa224c.avro', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/6ad7753b-326f-456a-92ed-2a7f3aa8f9ec-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/version-hint.text', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/v1.metadata.json'] (test.py:645, add_df)
2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o50 sc e (clientserver.py:501, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro604 (clientserver.py:512, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o604 defaultParallelism e (clientserver.py:501, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yi1 (clientserver.py:512, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o61 range i0 i100 i1 i1 e (clientserver.py:501, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro605 (clientserver.py:512, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command)
2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command)
2025-04-04 18:15:09
[ 670 ] DEBUG : Answer received: !ylo606 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o606 add sa e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro606 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro607 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o605 toDF ro607 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro608 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o50 sc e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro609 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o609 defaultParallelism e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yi1 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o61 range i1 i101 i1 i1 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro610 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ylo611 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o611 add sb e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro611 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro612 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o610 toDF ro612 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro613 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o613 apply sb e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro614 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u SparkSession rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : 
Answer received: !yro615 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o615 isDefined e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u SparkSession rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro616 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o616 get e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro617 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u SparkSession$ rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession$ (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession$ MODULE$ e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro618 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: i java.util.HashMap e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yao619 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o618 applyModifiableSettings ro617 ro619 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o61 parseDataType s"string" e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro620 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o614 cast ro620 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro621 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o613 withColumn sb ro621 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro622 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro623 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 
[ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro624 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u org rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u org.apache rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u org.apache.spark rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions.Window rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.expressions.Window (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.expressions.Window orderBy e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ylo625 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o625 add ro624 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro625 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro626 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro626 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro627 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 
] DEBUG : Command to send: c o623 over ro627 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro628 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o608 withColumn srow_index ro628 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro629 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro630 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro631 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u org rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u org.apache rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u org.apache.spark rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions.Window rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.expressions.Window (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.expressions.Window orderBy e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: 
!ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ylo632 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o632 add ro631 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro632 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro633 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro633 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro634 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o630 over ro634 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro635 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o622 withColumn srow_index ro635 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro636 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ylo637 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o637 add srow_index e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro637 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro638 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o629 join ro636 ro638 sinner e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro639 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o639 drop srow_index e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro640 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o640 writeTo stest_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312 e (clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yro641 (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Command to send: c o641 append e 
(clientserver.py:501, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:09 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/data/00000-29-4bafdff8-a4e7-4e8b-a900-65d629d30194-00001.parquet HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:09 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/data/00000-27-8d8b7d35-ae80-4d85-9f37-6cbd20ca4bc8-00001.parquet HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:09 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/data/00000-25-5b65139d-4664-4dd7-bcd6-81d16a5b6892-00001.parquet HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:09 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/v2.metadata.json HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:09 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/9da7cb9f-af1d-4a02-ab90-1ac6f0aa224c-m0.avro HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:09 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/snap-3044169209458028690-1-6ad7753b-326f-456a-92ed-2a7f3aa8f9ec.avro HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:09 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/snap-7172275750806074595-1-9da7cb9f-af1d-4a02-ab90-1ac6f0aa224c.avro HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:09 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/snap-7265914511372071741-1-9fb5070b-1811-4783-8530-ce060847cac4.avro HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:09 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/6ad7753b-326f-456a-92ed-2a7f3aa8f9ec-m0.avro HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:09 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/version-hint.text HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:09 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/v1.metadata.json HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:09 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/9fb5070b-1811-4783-8530-ce060847cac4-m0.avro HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:09 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/v3.metadata.json HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 
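[editor's note] The `Command to send` / `Answer received` pairs that dominate this log are py4j wire-protocol frames exchanged between the pytest process and the Spark JVM. Below is a minimal sketch of a decoder for the answer strings, with the type codes inferred from this trace alone (py4j's own protocol module is the authoritative reference):

```python
# Sketch: decode the py4j answer strings seen in this trace, e.g.
# "!yro615" (ok, object reference o615) or "!ybtrue" (ok, boolean true).
# The type codes below are inferred from the log, not from py4j's source.
ANSWER_TYPES = {
    "r": "object reference",  # !yro615 -> JVM object id o615
    "b": "boolean",           # !ybtrue
    "i": "integer",           # !yi1
    "v": "void",              # !yv (call returned nothing)
    "c": "class",             # !ycorg.apache.spark.sql.SparkSession
    "m": "method",            # !ym (reflection lookup resolved a method)
    "p": "package",           # !yp (reflection lookup resolved a package)
    "l": "list",              # !ylo625
    "a": "array or map",      # !yao619
}

def decode_answer(answer: str) -> str:
    """Turn a raw py4j answer like '!yro615' into a readable string."""
    status = "ok" if answer[1] == "y" else "error"
    kind = ANSWER_TYPES.get(answer[2], "unknown")
    return f"{status}: {kind} {answer[3:]}".rstrip()

if __name__ == "__main__":
    for raw in ("!yro615", "!ybtrue", "!yi1", "!yv", "!ym"):
        print(raw, "->", decode_answer(raw))
```

Read this way, the long runs of `m d oNNN e` / `!yv` frames later in the log are simply the Python side releasing JVM object references one by one.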
2025-04-04 18:15:09 [ 670 ] INFO : Adding another dataframe. result files: ['/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/data/00000-29-4bafdff8-a4e7-4e8b-a900-65d629d30194-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/data/00000-27-8d8b7d35-ae80-4d85-9f37-6cbd20ca4bc8-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/data/00000-25-5b65139d-4664-4dd7-bcd6-81d16a5b6892-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/v2.metadata.json', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/9da7cb9f-af1d-4a02-ab90-1ac6f0aa224c-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/snap-3044169209458028690-1-6ad7753b-326f-456a-92ed-2a7f3aa8f9ec.avro', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/snap-7172275750806074595-1-9da7cb9f-af1d-4a02-ab90-1ac6f0aa224c.avro', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/snap-7265914511372071741-1-9fb5070b-1811-4783-8530-ce060847cac4.avro', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/6ad7753b-326f-456a-92ed-2a7f3aa8f9ec-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/version-hint.text', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/v1.metadata.json', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/9fb5070b-1811-4783-8530-ce060847cac4-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/v3.metadata.json'] (test.py:645, add_df) 2025-04-04 18:15:09 [ 670 ] INFO : Setup complete. 
files: ['/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/data/00000-29-4bafdff8-a4e7-4e8b-a900-65d629d30194-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/data/00000-27-8d8b7d35-ae80-4d85-9f37-6cbd20ca4bc8-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/data/00000-25-5b65139d-4664-4dd7-bcd6-81d16a5b6892-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/v2.metadata.json', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/9da7cb9f-af1d-4a02-ab90-1ac6f0aa224c-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/snap-3044169209458028690-1-6ad7753b-326f-456a-92ed-2a7f3aa8f9ec.avro', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/snap-7172275750806074595-1-9da7cb9f-af1d-4a02-ab90-1ac6f0aa224c.avro', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/snap-7265914511372071741-1-9fb5070b-1811-4783-8530-ce060847cac4.avro', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/6ad7753b-326f-456a-92ed-2a7f3aa8f9ec-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/version-hint.text', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/v1.metadata.json', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/9fb5070b-1811-4783-8530-ce060847cac4-m0.avro', '/iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/metadata/v3.metadata.json'] (test.py:653, test_cluster_table_function) 2025-04-04 18:15:09 [ 670 ] DEBUG : Executing query SELECT * FROM system.clusters on node1 (cluster.py:3677, query) 2025-04-04 18:15:10 [ 670 ] INFO : Clusters setup: cluster_simple 1 1 0 1 node1 172.16.2.10 9000 1 default 0 0 0 \N \N \N cluster_simple 1 1 0 2 node2 172.16.2.8 9000 0 default 0 0 0 \N \N \N cluster_simple 1 1 0 3 node3 172.16.2.9 9000 0 default 0 0 0 \N \N \N (test.py:657, test_cluster_table_function) 2025-04-04 18:15:10 [ 670 ] DEBUG : Executing query SELECT * FROM icebergS3(s3, filename = 'iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/', format=Parquet, url = 'http://minio1:9001/root/') on node1 (cluster.py:3677, query) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o525 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o447 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o446 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o423 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o475 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 
18:15:10 [ 670 ] DEBUG : Command to send: m d o482 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o463 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o524 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o528 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o533 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o541 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o547 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o554 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o559 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o526 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o527 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o529 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o530 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o531 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o532 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o534 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o535 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o536 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o537 e (clientserver.py:501, 
send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o538 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o540 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o542 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o543 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o544 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o545 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o546 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o548 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o549 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o550 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o551 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o552 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o553 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o555 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o556 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o557 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o558 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o560 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv 
(clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o561 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o564 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o565 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o568 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o573 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o581 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o587 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o594 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o599 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o566 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o567 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o569 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o570 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o571 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o572 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o574 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o575 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o576 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command 
to send: m d o577 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o578 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o580 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o582 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o583 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o584 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o585 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o586 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o588 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o589 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o590 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o591 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o592 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o593 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o595 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o596 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o597 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o598 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o600 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 
670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o601 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o602 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o603 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o606 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o611 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o619 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o625 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o632 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o637 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o604 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o605 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o607 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o608 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o609 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o610 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o612 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o613 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o614 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 
2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o615 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o616 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o618 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o620 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o621 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o622 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o623 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o624 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o626 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o627 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o628 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o629 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o630 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o631 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o633 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o634 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o635 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o636 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o638 e 
(clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o639 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o640 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: m d o641 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Executing query SELECT * FROM icebergS3Cluster('cluster_simple', s3, filename = 'iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/', format=Parquet, url = 'http://minio1:9001/root/') on node1 (cluster.py:3677, query) 2025-04-04 18:15:10 [ 670 ] DEBUG : Executing query SELECT * FROM icebergS3(s3, filename = 'iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/', format=Parquet, url = 'http://minio1:9001/root/') SETTINGS object_storage_cluster='cluster_simple' on node1 (cluster.py:3677, query) 2025-04-04 18:15:10 [ 670 ] DEBUG : Executing query DROP TABLE IF EXISTS test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312; CREATE TABLE test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312 ENGINE=IcebergS3(s3, filename = 'iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/', format=Parquet, url = 'http://minio1:9001/root/') SETTINGS object_storage_cluster = 'cluster_simple' on node1 (cluster.py:3677, query) 2025-04-04 18:15:10 [ 670 ] DEBUG : Executing query SELECT * FROM test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312 on node1 (cluster.py:3677, query) 2025-04-04 18:15:10 [ 670 ] DEBUG : Executing query SELECT * FROM remote('node2',icebergS3Cluster('cluster_simple', s3, filename = 'iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/', format=Parquet, url = 'http://minio1:9001/root/')) on node1 (cluster.py:3677, query) 2025-04-04 18:15:10 [ 670 ] DEBUG : Executing query DROP TABLE IF EXISTS `test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312` SYNC on node1 (cluster.py:3677, query) 2025-04-04 18:15:10 [ 670 ] DEBUG : Executing query DROP TABLE IF EXISTS test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312; CREATE TABLE test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312 ENGINE=IcebergS3(s3, filename = 'iceberg_data/default/test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312/', format=Parquet, url = 'http://minio1:9001/root/') on node1 (cluster.py:3677, query) 2025-04-04 18:15:10 [ 670 ] DEBUG : Executing query SELECT * FROM test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312 on node1 (cluster.py:3677, query) 2025-04-04 18:15:10 [ 670 ] DEBUG : Executing query SELECT * FROM test_iceberg_cluster_1_s3_db530cfe_dc89_4211_9d6d_7bafc257b312 SETTINGS object_storage_cluster='cluster_simple' on node1 (cluster.py:3677, query)
______________________ test_cluster_table_function[s3-2] _______________________
[gw0] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = 
format_version = '2', storage_type = 's3'

    @pytest.mark.parametrize("format_version", ["1", "2"])
    @pytest.mark.parametrize("storage_type", ["s3", "azure", "hdfs"])
    def test_cluster_table_function(started_cluster, format_version, storage_type):
        if is_arm() and storage_type == "hdfs":
            pytest.skip("Disabled test IcebergHDFS for aarch64")
    
        instance = started_cluster.instances["node1"]
        spark = started_cluster.spark_session
    
        TABLE_NAME = (
            "test_iceberg_cluster_"
            + format_version
            + "_"
            + storage_type
            + "_"
            + get_uuid_str()
        )
    
        def add_df(mode):
            write_iceberg_from_df(
                spark,
                generate_data(spark, 0, 100),
                TABLE_NAME,
                mode=mode,
                format_version=format_version,
            )
    
            files = default_upload_directory(
                started_cluster,
                storage_type,
                f"/iceberg_data/default/{TABLE_NAME}/",
                f"/iceberg_data/default/{TABLE_NAME}/",
            )
    
            logging.info(f"Adding another dataframe. result files: {files}")
    
            return files
    
        files = add_df(mode="overwrite")
        for i in range(1, len(started_cluster.instances)):
            files = add_df(mode="append")
    
        logging.info(f"Setup complete. files: {files}")
        assert len(files) == 5 + 4 * (len(started_cluster.instances) - 1)
    
        clusters = instance.query(f"SELECT * FROM system.clusters")
        logging.info(f"Clusters setup: {clusters}")
    
        # Regular Query only node1
        table_function_expr = get_creation_expression(
            storage_type, TABLE_NAME, started_cluster, table_function=True
        )
        select_regular = (
            instance.query(f"SELECT * FROM {table_function_expr}").strip().split()
        )
    
        # Cluster Query with node1 as coordinator
        table_function_expr_cluster = get_creation_expression(
            storage_type,
            TABLE_NAME,
            started_cluster,
            table_function=True,
            run_on_cluster=True,
        )
        query_id_cluster = str(uuid.uuid4())
        select_cluster = (
            instance.query(
                f"SELECT * FROM {table_function_expr_cluster}", query_id=query_id_cluster
            )
            .strip()
            .split()
        )
    
        # Cluster Query with node1 as coordinator with alternative syntax
        query_id_cluster_alt_syntax = str(uuid.uuid4())
        select_cluster_alt_syntax = (
            instance.query(
                f"""
                SELECT * FROM {table_function_expr}
                SETTINGS object_storage_cluster='cluster_simple'
                """,
                query_id=query_id_cluster_alt_syntax,
            )
            .strip()
            .split()
        )
    
        create_iceberg_table(storage_type, instance, TABLE_NAME, started_cluster, object_storage_cluster='cluster_simple')
        query_id_cluster_table_engine = str(uuid.uuid4())
        select_cluster_table_engine = (
            instance.query(
                f"""
                SELECT * FROM {TABLE_NAME}
                """,
                query_id=query_id_cluster_table_engine,
            )
            .strip()
            .split()
        )
    
        select_remote_cluster = (
            instance.query(f"SELECT * FROM remote('node2',{table_function_expr_cluster})")
            .strip()
            .split()
        )
    
        instance.query(f"DROP TABLE IF EXISTS `{TABLE_NAME}` SYNC")
        create_iceberg_table(storage_type, instance, TABLE_NAME, started_cluster)
        query_id_pure_table_engine = str(uuid.uuid4())
        select_pure_table_engine = (
            instance.query(
                f"""
                SELECT * FROM {TABLE_NAME}
                """,
                query_id=query_id_pure_table_engine,
            )
            .strip()
            .split()
        )
        query_id_pure_table_engine_cluster = str(uuid.uuid4())
        select_pure_table_engine_cluster = (
            instance.query(
                f"""
                SELECT * FROM {TABLE_NAME}
                SETTINGS object_storage_cluster='cluster_simple'
                """,
                query_id=query_id_pure_table_engine_cluster,
            )
            .strip()
            .split()
        )
    
        # Simple size check
        assert len(select_regular) == 600
        assert len(select_cluster) == 600
        assert len(select_cluster_alt_syntax) == 600
>       assert len(select_cluster_table_engine) == 600
E       AssertionError: assert 1800 == 600
E        +  where 1800 = len(['0', '1', '1', '2', '2', '3', ...])

test_storage_iceberg/test.py:747: AssertionError
----------------------------- Captured stdout call -----------------------------
25/04/04 18:15:11 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
25/04/04 18:15:11 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 25/04/04 18:15:11 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 25/04/04 18:15:11 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 25/04/04 18:15:11 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 25/04/04 18:15:11 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. {} {} {} Upload to bucket: None {} Upload to bucket: None Upload to bucket: None Upload to bucket: None Upload to bucket: None 25/04/04 18:15:11 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 25/04/04 18:15:11 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 25/04/04 18:15:11 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 25/04/04 18:15:11 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 25/04/04 18:15:11 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 25/04/04 18:15:11 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. {} {} {} Upload to bucket: None Upload to bucket: None {} Upload to bucket: None Upload to bucket: None Upload to bucket: None Upload to bucket: None Upload to bucket: None Upload to bucket: None Upload to bucket: None 25/04/04 18:15:11 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 25/04/04 18:15:11 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 25/04/04 18:15:11 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 25/04/04 18:15:11 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 25/04/04 18:15:11 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 25/04/04 18:15:11 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 
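[editor's note] The repeated `WindowExec` warning above comes from the pattern visible in the py4j trace: `row_number().over(Window.orderBy(monotonically_increasing_id()))`. A window with no `partitionBy` forces Spark to shuffle every row into a single partition before numbering. A minimal PySpark sketch of the pattern, plus the partitioned variant that would avoid the warning (the partition key is purely illustrative, not from the test):

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.master("local[1]").getOrCreate()
df = spark.range(100).toDF("a")  # stand-in for the test's generated data

# The traced pattern: a global window with no partitionBy moves all rows
# into one partition, which is exactly what the WARN above complains about.
w = Window.orderBy(F.monotonically_increasing_id())
indexed = df.withColumn("row_index", F.row_number().over(w))

# Hypothetical alternative: a partitioned window keeps the work distributed,
# at the cost of numbering rows per partition rather than globally.
w_part = Window.partitionBy("a").orderBy(F.monotonically_increasing_id())
```

For a 100-row test dataframe the single-partition shuffle is harmless; the warning matters only at scale, which is presumably why the test leaves it as-is.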
{} {} {} Upload to bucket: None Upload to bucket: None Upload to bucket: None {} Upload to bucket: None Upload to bucket: None Upload to bucket: None Upload to bucket: None Upload to bucket: None Upload to bucket: None Upload to bucket: None Upload to bucket: None Upload to bucket: None Upload to bucket: None ----------------------------- Captured stderr call ----------------------------- Command to send: c o50 sc e Answer received: !yro642 Command to send: c o642 defaultParallelism e Answer received: !yi1 Command to send: c o61 range i0 i100 i1 i1 e Answer received: !yro643 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo644 Command to send: c o644 add sa e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro644 e Answer received: !yro645 Command to send: c o643 toDF ro645 e Answer received: !yro646 Command to send: c o50 sc e Answer received: !yro647 Command to send: c o647 defaultParallelism e Answer received: !yi1 Command to send: c o61 range i1 i101 i1 i1 e Answer received: !yro648 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo649 Command to send: c o649 add sb e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro649 e Answer received: !yro650 Command to send: c o648 toDF ro650 e Answer received: !yro651 Command to send: c o651 apply sb e Answer received: !yro652 Command to send: r u SparkSession rj e Answer received: !ycorg.apache.spark.sql.SparkSession Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e Answer received: !ym Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e Answer received: !yro653 Command to send: c o653 isDefined e Answer received: !ybtrue Command to send: r u SparkSession rj e Answer received: !ycorg.apache.spark.sql.SparkSession Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e Answer received: !ym Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e Answer received: !yro654 Command to send: c o654 get e Answer received: !yro655 Command to send: r u SparkSession$ rj e Answer received: !ycorg.apache.spark.sql.SparkSession$ Command to send: r m org.apache.spark.sql.SparkSession$ MODULE$ e Answer received: !yro656 Command to send: i java.util.HashMap e Answer received: !yao657 Command to send: c o656 applyModifiableSettings ro655 ro657 e Answer received: !yv Command to send: c o61 parseDataType s"string" e Answer received: !yro658 Command to send: c o652 cast ro658 e Answer received: !yro659 Command to send: c o651 withColumn sb ro659 e Answer received: !yro660 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions row_number e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions row_number e Answer received: !yro661 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e Answer 
received: !yro662 Command to send: r u org rj e Answer received: !yp Command to send: r u org.apache rj e Answer received: !yp Command to send: r u org.apache.spark rj e Answer received: !yp Command to send: r u org.apache.spark.sql rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions.Window rj e Answer received: !ycorg.apache.spark.sql.expressions.Window Command to send: r m org.apache.spark.sql.expressions.Window orderBy e Answer received: !ym Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo663 Command to send: c o663 add ro662 e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro663 e Answer received: !yro664 Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro664 e Answer received: !yro665 Command to send: c o661 over ro665 e Answer received: !yro666 Command to send: c o646 withColumn srow_index ro666 e Answer received: !yro667 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions row_number e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions row_number e Answer received: !yro668 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !yro669 Command to send: r u org rj e Answer received: !yp Command to send: r u org.apache rj e Answer received: !yp Command to send: r u org.apache.spark rj e Answer received: !yp Command to send: r u org.apache.spark.sql rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions.Window rj e Answer received: !ycorg.apache.spark.sql.expressions.Window Command to send: r m org.apache.spark.sql.expressions.Window orderBy e Answer received: !ym Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo670 Command to send: c o670 add ro669 e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro670 e Answer received: !yro671 Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro671 e Answer received: !yro672 Command to send: c o668 over ro672 e Answer received: !yro673 Command to send: c o660 withColumn srow_index ro673 e Answer received: !yro674 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo675 Command to send: c o675 add srow_index e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro675 e Answer received: !yro676 Command to send: c o667 join ro674 ro676 sinner e Answer received: !yro677 Command to send: c o677 drop srow_index e Answer received: !yro678 Command to send: c 
o678 writeTo stest_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5 e Answer received: !yro679 Command to send: c o679 tableProperty sformat-version s2 e Answer received: !yro680 Command to send: c o679 using siceberg e Answer received: !yro681 Command to send: c o679 create e Command to send: m d o563 e Answer received: !yv Command to send: m d o562 e Answer received: !yv Command to send: m d o539 e Answer received: !yv Command to send: m d o579 e Answer received: !yv Command to send: m d o644 e Answer received: !yv Command to send: m d o649 e Answer received: !yv Command to send: m d o657 e Answer received: !yv Command to send: m d o663 e Answer received: !yv Command to send: m d o670 e Answer received: !yv Command to send: m d o675 e Answer received: !yv Command to send: m d o642 e Answer received: !yv Command to send: m d o643 e Answer received: !yv Command to send: m d o645 e Answer received: !yv Command to send: m d o646 e Answer received: !yv Command to send: m d o647 e Answer received: !yv Command to send: m d o648 e Answer received: !yv Command to send: m d o650 e Answer received: !yv Command to send: m d o651 e Answer received: !yv Command to send: m d o652 e Answer received: !yv Command to send: m d o653 e Answer received: !yv Command to send: m d o654 e Answer received: !yv Command to send: m d o656 e Answer received: !yv Command to send: m d o658 e Answer received: !yv Command to send: m d o659 e Answer received: !yv Command to send: m d o660 e Answer received: !yv Command to send: m d o661 e Answer received: !yv Command to send: m d o662 e Answer received: !yv Command to send: m d o664 e Answer received: !yv Command to send: m d o665 e Answer received: !yv Command to send: m d o666 e Answer received: !yv Command to send: m d o667 e Answer received: !yv Command to send: m d o668 e Answer received: !yv Command to send: m d o669 e Answer received: !yv Command to send: m d o671 e Answer received: !yv Command to send: m d o672 e Answer received: !yv Command to send: m d o673 e Answer received: !yv Command to send: m d o674 e Answer received: !yv Command to send: m d o676 e Answer received: !yv Command to send: m d o677 e Answer received: !yv Command to send: m d o680 e Answer received: !yv Command to send: m d o681 e Answer received: !yv Answer received: !yv http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/data/00000-31-2a4d0936-ad58-4e2a-b8cf-375a05e858d6-00001.parquet HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/0ff21ff2-56e3-4788-a1a1-b7ec3182e4a7-m0.avro HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/version-hint.text HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/snap-367251160246727166-1-0ff21ff2-56e3-4788-a1a1-b7ec3182e4a7.avro HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/v1.metadata.json HTTP/1.1" 200 0 Adding another dataframe. 
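Note: the Py4J traffic above is PySpark driving the JVM over py4j's wire protocol: "c <obj> <method> ... e" calls a method on a tracked JVM object, "r u" / "r m" are reflection lookups of a class or member, "i <class>" constructs an instance, and "m d <obj>" drops a reference once the Python side garbage-collects it. Replies start with "!y" (success) followed by a type tag: "ro642" is a new object reference, "i1" an integer, "btrue" a boolean, "c..." / "m" / "p" a class, method or package, "lo..." / "ao..." a list or map reference, and "v" void. Decoded, the exchange is an ordinary DataFrame build followed by an Iceberg format-version 2 CREATE. A minimal PySpark sketch that would emit essentially this trace (the helper name and table_name are assumptions, not taken from the log):

    from pyspark.sql.functions import monotonically_increasing_id, row_number
    from pyspark.sql.window import Window

    def generate_data(spark, start, end):
        # "range i0 i100 i1 i1" + toDF(["a"]): a single-partition integer column
        a = spark.range(start, end, 1, 1).toDF("a")
        b = spark.range(start + 1, end + 1, 1, 1).toDF("b")
        # 'parseDataType s"string"' + "cast": column b is stored as strings
        b = b.withColumn("b", b["b"].cast("string"))
        # "row_number ... over ... Window orderBy monotonically_increasing_id":
        # a deterministic row index used only to zip the two frames together
        a = a.withColumn(
            "row_index", row_number().over(Window.orderBy(monotonically_increasing_id()))
        )
        b = b.withColumn(
            "row_index", row_number().over(Window.orderBy(monotonically_increasing_id()))
        )
        return a.join(b, on=["row_index"], how="inner").drop("row_index")

    # "writeTo ... tableProperty sformat-version s2 ... using siceberg ... create"
    generate_data(spark, 0, 100).writeTo(table_name).tableProperty(
        "format-version", "2"
    ).using("iceberg").create()

The MinIO PUTs that follow are that first commit: one parquet data file plus v1.metadata.json, a manifest (*-m0.avro), a manifest list (snap-*.avro), and version-hint.text.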
result files: ['/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/data/00000-31-2a4d0936-ad58-4e2a-b8cf-375a05e858d6-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/0ff21ff2-56e3-4788-a1a1-b7ec3182e4a7-m0.avro', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/version-hint.text', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/snap-367251160246727166-1-0ff21ff2-56e3-4788-a1a1-b7ec3182e4a7.avro', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/v1.metadata.json'] Command to send: c o50 sc e Answer received: !yro682 Command to send: c o682 defaultParallelism e Answer received: !yi1 Command to send: c o61 range i0 i100 i1 i1 e Answer received: !yro683 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo684 Command to send: c o684 add sa e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro684 e Answer received: !yro685 Command to send: c o683 toDF ro685 e Answer received: !yro686 Command to send: c o50 sc e Answer received: !yro687 Command to send: c o687 defaultParallelism e Answer received: !yi1 Command to send: c o61 range i1 i101 i1 i1 e Answer received: !yro688 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo689 Command to send: c o689 add sb e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro689 e Answer received: !yro690 Command to send: c o688 toDF ro690 e Answer received: !yro691 Command to send: c o691 apply sb e Answer received: !yro692 Command to send: r u SparkSession rj e Answer received: !ycorg.apache.spark.sql.SparkSession Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e Answer received: !ym Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e Answer received: !yro693 Command to send: c o693 isDefined e Answer received: !ybtrue Command to send: r u SparkSession rj e Answer received: !ycorg.apache.spark.sql.SparkSession Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e Answer received: !ym Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e Answer received: !yro694 Command to send: c o694 get e Answer received: !yro695 Command to send: r u SparkSession$ rj e Answer received: !ycorg.apache.spark.sql.SparkSession$ Command to send: r m org.apache.spark.sql.SparkSession$ MODULE$ e Answer received: !yro696 Command to send: i java.util.HashMap e Answer received: !yao697 Command to send: c o696 applyModifiableSettings ro695 ro697 e Answer received: !yv Command to send: c o61 parseDataType s"string" e Answer received: !yro698 Command to send: c o692 cast ro698 e Answer received: !yro699 Command to send: c o691 withColumn sb ro699 e Answer received: !yro700 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions row_number e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions 
row_number e Answer received: !yro701 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !yro702 Command to send: r u org rj e Answer received: !yp Command to send: r u org.apache rj e Answer received: !yp Command to send: r u org.apache.spark rj e Answer received: !yp Command to send: r u org.apache.spark.sql rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions.Window rj e Answer received: !ycorg.apache.spark.sql.expressions.Window Command to send: r m org.apache.spark.sql.expressions.Window orderBy e Answer received: !ym Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo703 Command to send: c o703 add ro702 e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro703 e Answer received: !yro704 Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro704 e Answer received: !yro705 Command to send: c o701 over ro705 e Answer received: !yro706 Command to send: c o686 withColumn srow_index ro706 e Answer received: !yro707 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions row_number e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions row_number e Answer received: !yro708 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !yro709 Command to send: r u org rj e Answer received: !yp Command to send: r u org.apache rj e Answer received: !yp Command to send: r u org.apache.spark rj e Answer received: !yp Command to send: r u org.apache.spark.sql rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions.Window rj e Answer received: !ycorg.apache.spark.sql.expressions.Window Command to send: r m org.apache.spark.sql.expressions.Window orderBy e Answer received: !ym Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo710 Command to send: c o710 add ro709 e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro710 e Answer received: !yro711 Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro711 e Answer received: !yro712 Command to send: c o708 over ro712 e Answer received: !yro713 Command to send: c o700 withColumn srow_index ro713 e Answer received: !yro714 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo715 Command to 
send: c o715 add srow_index e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro715 e Answer received: !yro716 Command to send: c o707 join ro714 ro716 sinner e Answer received: !yro717 Command to send: c o717 drop srow_index e Answer received: !yro718 Command to send: c o718 writeTo stest_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5 e Answer received: !yro719 Command to send: c o719 append e Answer received: !yv http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/data/00000-31-2a4d0936-ad58-4e2a-b8cf-375a05e858d6-00001.parquet HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/data/00000-33-c48960ec-f768-4041-9afb-2d282c1320cc-00001.parquet HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/v2.metadata.json HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/0ff21ff2-56e3-4788-a1a1-b7ec3182e4a7-m0.avro HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/snap-8138613628709705005-1-0d578a4b-b355-4c7b-93b4-6be814b84808.avro HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/version-hint.text HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/0d578a4b-b355-4c7b-93b4-6be814b84808-m0.avro HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/snap-367251160246727166-1-0ff21ff2-56e3-4788-a1a1-b7ec3182e4a7.avro HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/v1.metadata.json HTTP/1.1" 200 0 Adding another dataframe. 
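Note: the second dataframe goes through the DataFrameWriterV2 append path rather than create; the trailing "writeTo s<table> ... append" in the trace is the whole difference. Under the same assumptions as the sketch above:

    # Each append commits a new Iceberg snapshot: one new parquet data file, a new
    # manifest (*-m0.avro), a new manifest list (snap-*.avro), and the table
    # metadata rewritten as v2.metadata.json, with version-hint.text repointed.
    generate_data(spark, 0, 100).writeTo(table_name).append()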
result files: ['/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/data/00000-31-2a4d0936-ad58-4e2a-b8cf-375a05e858d6-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/data/00000-33-c48960ec-f768-4041-9afb-2d282c1320cc-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/v2.metadata.json', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/0ff21ff2-56e3-4788-a1a1-b7ec3182e4a7-m0.avro', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/snap-8138613628709705005-1-0d578a4b-b355-4c7b-93b4-6be814b84808.avro', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/version-hint.text', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/0d578a4b-b355-4c7b-93b4-6be814b84808-m0.avro', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/snap-367251160246727166-1-0ff21ff2-56e3-4788-a1a1-b7ec3182e4a7.avro', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/v1.metadata.json'] Command to send: c o50 sc e Answer received: !yro720 Command to send: c o720 defaultParallelism e Answer received: !yi1 Command to send: c o61 range i0 i100 i1 i1 e Answer received: !yro721 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo722 Command to send: c o722 add sa e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro722 e Answer received: !yro723 Command to send: c o721 toDF ro723 e Answer received: !yro724 Command to send: c o50 sc e Answer received: !yro725 Command to send: c o725 defaultParallelism e Answer received: !yi1 Command to send: c o61 range i1 i101 i1 i1 e Answer received: !yro726 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo727 Command to send: c o727 add sb e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro727 e Answer received: !yro728 Command to send: c o726 toDF ro728 e Answer received: !yro729 Command to send: c o729 apply sb e Answer received: !yro730 Command to send: r u SparkSession rj e Answer received: !ycorg.apache.spark.sql.SparkSession Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e Answer received: !ym Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e Answer received: !yro731 Command to send: c o731 isDefined e Answer received: !ybtrue Command to send: r u SparkSession rj e Answer received: !ycorg.apache.spark.sql.SparkSession Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e Answer received: !ym Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e Answer received: !yro732 Command to send: c o732 get e Answer received: !yro733 Command to send: r u SparkSession$ rj e Answer received: !ycorg.apache.spark.sql.SparkSession$ Command to send: r m org.apache.spark.sql.SparkSession$ MODULE$ e Answer received: !yro734 Command to 
send: i java.util.HashMap e Answer received: !yao735 Command to send: c o734 applyModifiableSettings ro733 ro735 e Answer received: !yv Command to send: c o61 parseDataType s"string" e Answer received: !yro736 Command to send: c o730 cast ro736 e Answer received: !yro737 Command to send: c o729 withColumn sb ro737 e Answer received: !yro738 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions row_number e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions row_number e Answer received: !yro739 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !yro740 Command to send: r u org rj e Answer received: !yp Command to send: r u org.apache rj e Answer received: !yp Command to send: r u org.apache.spark rj e Answer received: !yp Command to send: r u org.apache.spark.sql rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions.Window rj e Answer received: !ycorg.apache.spark.sql.expressions.Window Command to send: r m org.apache.spark.sql.expressions.Window orderBy e Answer received: !ym Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo741 Command to send: c o741 add ro740 e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro741 e Answer received: !yro742 Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro742 e Answer received: !yro743 Command to send: c o739 over ro743 e Answer received: !yro744 Command to send: c o724 withColumn srow_index ro744 e Answer received: !yro745 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions row_number e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions row_number e Answer received: !yro746 Command to send: r u functions rj e Answer received: !ycorg.apache.spark.sql.functions Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !ym Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e Answer received: !yro747 Command to send: r u org rj e Answer received: !yp Command to send: r u org.apache rj e Answer received: !yp Command to send: r u org.apache.spark rj e Answer received: !yp Command to send: r u org.apache.spark.sql rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions rj e Answer received: !yp Command to send: r u org.apache.spark.sql.expressions.Window rj e Answer received: !ycorg.apache.spark.sql.expressions.Window Command to send: r m org.apache.spark.sql.expressions.Window orderBy e Answer received: !ym Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo748 Command to send: c o748 add ro747 e Answer received: !ybtrue Command to send: c 
z:org.apache.spark.api.python.PythonUtils toSeq ro748 e Answer received: !yro749 Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro749 e Answer received: !yro750 Command to send: c o746 over ro750 e Answer received: !yro751 Command to send: c o738 withColumn srow_index ro751 e Answer received: !yro752 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e Answer received: !ym Command to send: i java.util.ArrayList e Answer received: !ylo753 Command to send: c o753 add srow_index e Answer received: !ybtrue Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro753 e Answer received: !yro754 Command to send: c o745 join ro752 ro754 sinner e Answer received: !yro755 Command to send: c o755 drop srow_index e Answer received: !yro756 Command to send: c o756 writeTo stest_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5 e Answer received: !yro757 Command to send: c o757 append e Answer received: !yv http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/data/00000-31-2a4d0936-ad58-4e2a-b8cf-375a05e858d6-00001.parquet HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/data/00000-33-c48960ec-f768-4041-9afb-2d282c1320cc-00001.parquet HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/data/00000-35-f81e20b5-1086-43b9-b187-2d7b4ebb3ae2-00001.parquet HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/v2.metadata.json HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/0ff21ff2-56e3-4788-a1a1-b7ec3182e4a7-m0.avro HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/snap-8138613628709705005-1-0d578a4b-b355-4c7b-93b4-6be814b84808.avro HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/snap-8034851105952010148-1-42300b89-d077-4d68-8fee-a8b4175466bb.avro HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/version-hint.text HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/42300b89-d077-4d68-8fee-a8b4175466bb-m0.avro HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/0d578a4b-b355-4c7b-93b4-6be814b84808-m0.avro HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/snap-367251160246727166-1-0ff21ff2-56e3-4788-a1a1-b7ec3182e4a7.avro HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/v1.metadata.json HTTP/1.1" 200 0 http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/v3.metadata.json HTTP/1.1" 200 0 Adding another dataframe. 
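Note: the PUT lines show where everything lands in MinIO: bucket "root", object keys beginning with "/iceberg_data/..." (the doubled slash in /root//iceberg_data suggests the warehouse path itself starts with a slash). The "result files" listing that follows could plausibly be produced like this (cluster.minio_client is an assumed fixture name, not confirmed by the log):

    # List every object the table has written; object_name keeps the leading
    # slash visible in the log's key names.
    files = [
        obj.object_name
        for obj in cluster.minio_client.list_objects(
            "root", prefix="/iceberg_data/", recursive=True
        )
    ]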
result files: ['/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/data/00000-31-2a4d0936-ad58-4e2a-b8cf-375a05e858d6-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/data/00000-33-c48960ec-f768-4041-9afb-2d282c1320cc-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/data/00000-35-f81e20b5-1086-43b9-b187-2d7b4ebb3ae2-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/v2.metadata.json', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/0ff21ff2-56e3-4788-a1a1-b7ec3182e4a7-m0.avro', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/snap-8138613628709705005-1-0d578a4b-b355-4c7b-93b4-6be814b84808.avro', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/snap-8034851105952010148-1-42300b89-d077-4d68-8fee-a8b4175466bb.avro', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/version-hint.text', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/42300b89-d077-4d68-8fee-a8b4175466bb-m0.avro', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/0d578a4b-b355-4c7b-93b4-6be814b84808-m0.avro', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/snap-367251160246727166-1-0ff21ff2-56e3-4788-a1a1-b7ec3182e4a7.avro', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/v1.metadata.json', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/v3.metadata.json'] Setup complete. 
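Note: "Setup complete." leaves the table at its third snapshot, and the 13 paths above are exactly the layout Iceberg's file-based catalog should produce: three data files (one per write), three table-metadata versions (v1..v3, rewritten on every commit), three manifest lists (snap-*.avro, one per snapshot), three manifests (*-m0.avro), and version-hint.text. A quick sanity check over that listing (a sketch; files is the list printed above):

    data_files = [f for f in files if "/data/" in f and f.endswith(".parquet")]
    metadata_json = [f for f in files if f.endswith(".metadata.json")]
    manifest_lists = [f for f in files if "/metadata/snap-" in f]
    assert len(data_files) == 3      # create + two appends
    assert len(metadata_json) == 3   # v1, v2, v3
    assert len(manifest_lists) == 3  # one manifest list per snapshot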
files: ['/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/data/00000-31-2a4d0936-ad58-4e2a-b8cf-375a05e858d6-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/data/00000-33-c48960ec-f768-4041-9afb-2d282c1320cc-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/data/00000-35-f81e20b5-1086-43b9-b187-2d7b4ebb3ae2-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/v2.metadata.json', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/0ff21ff2-56e3-4788-a1a1-b7ec3182e4a7-m0.avro', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/snap-8138613628709705005-1-0d578a4b-b355-4c7b-93b4-6be814b84808.avro', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/snap-8034851105952010148-1-42300b89-d077-4d68-8fee-a8b4175466bb.avro', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/version-hint.text', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/42300b89-d077-4d68-8fee-a8b4175466bb-m0.avro', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/0d578a4b-b355-4c7b-93b4-6be814b84808-m0.avro', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/snap-367251160246727166-1-0ff21ff2-56e3-4788-a1a1-b7ec3182e4a7.avro', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/v1.metadata.json', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/v3.metadata.json']
Executing query SELECT * FROM system.clusters on node1
Clusters setup:
cluster_simple 1 1 0 1 node1 172.16.2.10 9000 1 default 0 0 0 \N \N \N
cluster_simple 1 1 0 2 node2 172.16.2.8 9000 0 default 0 0 0 \N \N \N
cluster_simple 1 1 0 3 node3 172.16.2.9 9000 0 default 0 0 0 \N \N \N
Executing query SELECT * FROM icebergS3(s3, filename = 'iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/', format=Parquet, url = 'http://minio1:9001/root/') on node1
Executing query SELECT * FROM icebergS3Cluster('cluster_simple', s3, filename = 'iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/', format=Parquet, url = 'http://minio1:9001/root/') on node1
Executing query SELECT * FROM icebergS3(s3, filename = 'iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/', format=Parquet, url = 'http://minio1:9001/root/') SETTINGS object_storage_cluster='cluster_simple' on node1
Command to send: m d o684 e Answer received: !yv
Command to send: m d o689 e Answer received: !yv
Command to send: m d o697 e Answer received: !yv
Command to send: m d o703 e Answer received: !yv
Command to send: m d o710 e Answer received: !yv
Command to send: m d o17 e Answer received: !yv
Command to send: m d o16 e Answer received: !yv
Command to send: m d o25 e Answer received: !yv
Command to send: m d o14 e Answer received: !yv
Command to send: m d o153 e Answer received: !yv
Command to send: m d o269 e Answer received: !yv
Command to send: m d o385 e Answer received: !yv
Command to send: m d o501 e Answer received: !yv
Command to send: m d o617 e Answer received: !yv
Command to send: m d o679 e Answer received: !yv
Command to send: m d o678 e Answer received: !yv
Command to send: m d o655 e Answer received: !yv
Command to send: m d o682 e Answer received: !yv
Command to send: m d o683 e Answer received: !yv
Command to send: m d o685 e Answer received: !yv
Command to send: m d o686 e Answer received: !yv
Command to send: m d o687 e Answer received: !yv
Command to send: m d o688 e Answer received: !yv
Command to send: m d o690 e Answer received: !yv
Command to send: m d o691 e Answer received: !yv
Command to send: m d o692 e Answer received: !yv
Command to send: m d o693 e Answer received: !yv
Command to send: m d o694 e Answer received: !yv
Command to send: m d o696 e Answer received: !yv
Command to send: m d o698 e Answer received: !yv
Command to send: m d o699 e Answer received: !yv
Command to send: m d o701 e Answer received: !yv
Command to send: m d o702 e Answer received: !yv
Command to send: m d o704 e Answer received: !yv
Command to send: m d o705 e Answer received: !yv
Command to send: m d o706 e Answer received: !yv
Command to send: m d o709 e Answer received: !yv
Command to send: m d o711 e Answer received: !yv
Command to send: m d o715 e Answer received: !yv
Command to send: m d o722 e Answer received: !yv
Command to send: m d o727 e Answer received: !yv
Command to send: m d o735 e Answer received: !yv
Command to send: m d o741 e Answer received: !yv
Command to send: m d o714 e Answer received: !yv
Command to send: m d o716 e Answer received: !yv
Command to send: m d o717 e Answer received: !yv
Command to send: m d o718 e Answer received: !yv
Command to send: m d o719 e Answer received: !yv
Command to send: m d o720 e Answer received: !yv
Command to send: m d o721 e Answer received: !yv
Command to send: m d o723 e Answer received: !yv
Command to send: m d o724 e Answer received: !yv
Command to send: m d o725 e Answer received: !yv
Command to send: m d o726 e Answer received: !yv
Command to send: m d o728 e Answer received: !yv
Command to send: m d o729 e Answer received: !yv
Command to send: m d o730 e Answer received: !yv
Command to send: m d o731 e Answer received: !yv
Command to send: m d o732 e Answer received: !yv
Command to send: m d o734 e Answer received: !yv
Command to send: m d o736 e Answer received: !yv
Command to send: m d o737 e Answer received: !yv
Command to send: m d o739 e Answer received: !yv
Command to send: m d o740 e Answer received: !yv
Command to send: m d o742 e Answer received: !yv
Command to send: m d o743 e Answer received: !yv
Command to send: m d o744 e Answer received: !yv
Command to send: m d o748 e Answer received: !yv
Command to send: m d o753 e Answer received: !yv
Executing query DROP TABLE IF EXISTS test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5; CREATE TABLE test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5 ENGINE=IcebergS3(s3, filename = 'iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/', format=Parquet, url = 'http://minio1:9001/root/') SETTINGS object_storage_cluster = 'cluster_simple' on node1
Executing query SELECT * FROM test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5 on node1
Executing query SELECT * FROM remote('node2',icebergS3Cluster('cluster_simple', s3, filename = 'iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/', format=Parquet, url = 'http://minio1:9001/root/')) on node1
Executing query DROP TABLE IF EXISTS `test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5` SYNC on node1
Executing query DROP TABLE IF EXISTS
test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5; CREATE TABLE test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5 ENGINE=IcebergS3(s3, filename = 'iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/', format=Parquet, url = 'http://minio1:9001/root/') on node1 Executing query SELECT * FROM test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5 on node1 Executing query SELECT * FROM test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5 SETTINGS object_storage_cluster='cluster_simple' on node1 ------------------------------ Captured log call ------------------------------- 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: c o50 sc e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yro642 (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: c o642 defaultParallelism e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yi1 (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: c o61 range i0 i100 i1 i1 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yro643 (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !ylo644 (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: c o644 add sa e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro644 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yro645 (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: c o643 toDF ro645 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yro646 (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: c o50 sc e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yro647 (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: c o647 defaultParallelism e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yi1 (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: c o61 range i1 i101 i1 i1 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yro648 (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: r m 
org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !ylo649 (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: c o649 add sb e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro649 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yro650 (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: c o648 toDF ro650 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yro651 (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: c o651 apply sb e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yro652 (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: r u SparkSession rj e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yro653 (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: c o653 isDefined e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: r u SparkSession rj e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yro654 (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: c o654 get e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yro655 (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: r u SparkSession$ rj e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession$ (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession$ MODULE$ e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yro656 (clientserver.py:512, 
send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: i java.util.HashMap e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yao657 (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: c o656 applyModifiableSettings ro655 ro657 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: c o61 parseDataType s"string" e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yro658 (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: c o652 cast ro658 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yro659 (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: c o651 withColumn sb ro659 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yro660 (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yro661 (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yro662 (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: r u org rj e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: r u org.apache rj e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: r u org.apache.spark rj e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql rj e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 
2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions.Window rj e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.expressions.Window (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.expressions.Window orderBy e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !ylo663 (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: c o663 add ro662 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro663 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yro664 (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro664 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yro665 (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: c o661 over ro665 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yro666 (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: c o646 withColumn srow_index ro666 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yro667 (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yro668 (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: c 
z:org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yro669 (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: r u org rj e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: r u org.apache rj e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: r u org.apache.spark rj e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql rj e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions.Window rj e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.expressions.Window (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.expressions.Window orderBy e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !ylo670 (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: c o670 add ro669 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro670 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yro671 (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro671 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yro672 (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: c o668 over ro672 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yro673 (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: c o660 withColumn srow_index ro673 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yro674 (clientserver.py:512, send_command) 
2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !ylo675 (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: c o675 add srow_index e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro675 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yro676 (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: c o667 join ro674 ro676 sinner e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yro677 (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: c o677 drop srow_index e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yro678 (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: c o678 writeTo stest_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yro679 (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: c o679 tableProperty sformat-version s2 e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yro680 (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: c o679 using siceberg e (clientserver.py:501, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Answer received: !yro681 (clientserver.py:512, send_command) 2025-04-04 18:15:10 [ 670 ] DEBUG : Command to send: c o679 create e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: m d o563 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: m d o562 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: m d o539 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: m d o579 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: m d o644 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: m d o649 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 
18:15:11 [ 670 ] DEBUG : Command to send: m d o657 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: m d o663 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: m d o670 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: m d o675 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: m d o642 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: m d o643 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: m d o645 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: m d o646 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: m d o647 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: m d o648 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: m d o650 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: m d o651 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: m d o652 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: m d o653 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: m d o654 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: m d o656 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: m d o658 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: m d o659 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: m d o660 e (clientserver.py:501, 
send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: m d o661 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: m d o662 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: m d o664 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: m d o665 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: m d o666 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: m d o667 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: m d o668 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: m d o669 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: m d o671 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: m d o672 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: m d o673 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: m d o674 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: m d o676 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: m d o677 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: m d o680 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: m d o681 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/data/00000-31-2a4d0936-ad58-4e2a-b8cf-375a05e858d6-00001.parquet HTTP/1.1" 200 0 (connectionpool.py:547, 
_make_request) 2025-04-04 18:15:11 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/0ff21ff2-56e3-4788-a1a1-b7ec3182e4a7-m0.avro HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:11 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/version-hint.text HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:11 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/snap-367251160246727166-1-0ff21ff2-56e3-4788-a1a1-b7ec3182e4a7.avro HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:11 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/v1.metadata.json HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:11 [ 670 ] INFO : Adding another dataframe. result files: ['/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/data/00000-31-2a4d0936-ad58-4e2a-b8cf-375a05e858d6-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/0ff21ff2-56e3-4788-a1a1-b7ec3182e4a7-m0.avro', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/version-hint.text', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/snap-367251160246727166-1-0ff21ff2-56e3-4788-a1a1-b7ec3182e4a7.avro', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/v1.metadata.json'] (test.py:645, add_df) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o50 sc e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro682 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o682 defaultParallelism e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yi1 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o61 range i0 i100 i1 i1 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro683 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ylo684 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o684 add sa e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro684 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro685 
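The PUT batch above is one Iceberg commit landing in MinIO, in the standard Hadoop-catalog layout:

    data/00000-31-...-00001.parquet       -- the rows themselves, one file per write task
    metadata/<uuid>-m0.avro               -- manifest: which data files the snapshot holds
    metadata/snap-<id>-1-<uuid>.avro      -- manifest list for snapshot <id>
    metadata/v1.metadata.json             -- table metadata version 1 (schema, snapshots)
    metadata/version-hint.text            -- pointer to the current metadata version

Each later commit adds a new vN.metadata.json and snap-*.avro and rewrites version-hint.text. That every earlier file is PUT again on each iteration suggests the harness writes the table through Spark into a local warehouse and then mirrors the whole directory into MinIO after each commit; the uploads are logged by urllib3's connectionpool, i.e. a Python S3 client, not by Spark itself. The doubled slash in /root//iceberg_data is plain path concatenation, and MinIO accepts it (every PUT returns 200).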
(clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o683 toDF ro685 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro686 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o50 sc e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro687 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o687 defaultParallelism e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yi1 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o61 range i1 i101 i1 i1 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro688 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ylo689 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o689 add sb e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro689 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro690 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o688 toDF ro690 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro691 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o691 apply sb e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro692 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u SparkSession rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro693 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o693 isDefined e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u SparkSession rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession (clientserver.py:512, 
send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro694 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o694 get e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro695 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u SparkSession$ rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession$ (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession$ MODULE$ e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro696 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: i java.util.HashMap e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yao697 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o696 applyModifiableSettings ro695 ro697 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o61 parseDataType s"string" e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro698 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o692 cast ro698 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro699 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o691 withColumn sb ro699 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro700 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro701 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e 
(clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro702 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u org rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u org.apache rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u org.apache.spark rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions.Window rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.expressions.Window (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.expressions.Window orderBy e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ylo703 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o703 add ro702 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro703 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro704 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro704 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro705 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o701 over ro705 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro706 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o686 withColumn srow_index ro706 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro707 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u 
functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro708 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro709 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u org rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u org.apache rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u org.apache.spark rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions.Window rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.expressions.Window (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.expressions.Window orderBy e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer 
received: !ylo710 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o710 add ro709 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro710 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro711 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro711 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro712 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o708 over ro712 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro713 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o700 withColumn srow_index ro713 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro714 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ylo715 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o715 add srow_index e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro715 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro716 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o707 join ro714 ro716 sinner e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro717 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o717 drop srow_index e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro718 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o718 writeTo stest_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro719 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o719 append e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/data/00000-31-2a4d0936-ad58-4e2a-b8cf-375a05e858d6-00001.parquet HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:11 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT 
/root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/data/00000-33-c48960ec-f768-4041-9afb-2d282c1320cc-00001.parquet HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:11 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/v2.metadata.json HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:11 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/0ff21ff2-56e3-4788-a1a1-b7ec3182e4a7-m0.avro HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:11 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/snap-8138613628709705005-1-0d578a4b-b355-4c7b-93b4-6be814b84808.avro HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:11 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/version-hint.text HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:11 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/0d578a4b-b355-4c7b-93b4-6be814b84808-m0.avro HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:11 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/snap-367251160246727166-1-0ff21ff2-56e3-4788-a1a1-b7ec3182e4a7.avro HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:11 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/v1.metadata.json HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:11 [ 670 ] INFO : Adding another dataframe. 
result files: ['/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/data/00000-31-2a4d0936-ad58-4e2a-b8cf-375a05e858d6-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/data/00000-33-c48960ec-f768-4041-9afb-2d282c1320cc-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/v2.metadata.json', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/0ff21ff2-56e3-4788-a1a1-b7ec3182e4a7-m0.avro', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/snap-8138613628709705005-1-0d578a4b-b355-4c7b-93b4-6be814b84808.avro', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/version-hint.text', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/0d578a4b-b355-4c7b-93b4-6be814b84808-m0.avro', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/snap-367251160246727166-1-0ff21ff2-56e3-4788-a1a1-b7ec3182e4a7.avro', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/v1.metadata.json'] (test.py:645, add_df) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o50 sc e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro720 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o720 defaultParallelism e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yi1 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o61 range i0 i100 i1 i1 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro721 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ylo722 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o722 add sa e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro722 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro723 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o721 toDF ro723 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro724 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o50 sc e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro725 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o725 defaultParallelism e 
(clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yi1 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o61 range i1 i101 i1 i1 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro726 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ylo727 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o727 add sb e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro727 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro728 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o726 toDF ro728 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro729 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o729 apply sb e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro730 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u SparkSession rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro731 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o731 isDefined e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u SparkSession rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.SparkSession getActiveSession e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro732 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 
] DEBUG : Command to send: c o732 get e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro733 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u SparkSession$ rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.SparkSession$ (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.SparkSession$ MODULE$ e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro734 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: i java.util.HashMap e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yao735 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o734 applyModifiableSettings ro733 ro735 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o61 parseDataType s"string" e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro736 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o730 cast ro736 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro737 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o729 withColumn sb ro737 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro738 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro739 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro740 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u org rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u org.apache rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 
18:15:11 [ 670 ] DEBUG : Command to send: r u org.apache.spark rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions.Window rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.expressions.Window (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.expressions.Window orderBy e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ylo741 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o741 add ro740 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro741 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro742 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.expressions.Window orderBy ro742 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro743 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o739 over ro743 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro744 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o724 withColumn srow_index ro744 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro745 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions row_number e (clientserver.py:501, send_command) 
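Stripped of the py4j plumbing, each of these blocks is the same PySpark sequence: build two single-column frames with range()/toDF(), cast column b to string, give both frames a synthetic row_index via row_number() over a window ordered by monotonically_increasing_id(), inner-join on it, drop it, and write the result to the Iceberg table. Roughly the following, reconstructed from the call sequence only (the helper name is a guess, and `spark` / `table_name` are assumed to be in scope):

    from pyspark.sql.functions import monotonically_increasing_id, row_number
    from pyspark.sql.window import Window

    def make_df(spark):
        # "c o61 range i0 i100 i1 i1" == spark.range(start=0, end=100, step=1, numPartitions=1)
        a = spark.range(0, 100, 1, 1).toDF("a")
        b = spark.range(1, 101, 1, 1).toDF("b")
        b = b.withColumn("b", b["b"].cast("string"))
        # zip the two frames row by row through a synthetic index
        w = Window.orderBy(monotonically_increasing_id())
        a = a.withColumn("row_index", row_number().over(w))
        b = b.withColumn("row_index", row_number().over(w))
        return a.join(b, ["row_index"], "inner").drop("row_index")

    # First write (the "c o679 ..." sequence further up): create the table as
    # Iceberg format-version 2.
    make_df(spark).writeTo(table_name) \
        .tableProperty("format-version", "2").using("iceberg").create()
    # Each later block appends the same 100 rows again:
    make_df(spark).writeTo(table_name).append()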
2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro746 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u functions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.functions (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.sql.functions monotonically_increasing_id e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro747 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u org rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u org.apache rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u org.apache.spark rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yp (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u org.apache.spark.sql.expressions.Window rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.sql.expressions.Window (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r m org.apache.spark.sql.expressions.Window orderBy e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ylo748 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o748 add ro747 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro748 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro749 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to 
send: c z:org.apache.spark.sql.expressions.Window orderBy ro749 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro750 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o746 over ro750 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro751 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o738 withColumn srow_index ro751 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro752 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r u PythonUtils rj e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ycorg.apache.spark.api.python.PythonUtils (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: r m org.apache.spark.api.python.PythonUtils toSeq e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ym (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: i java.util.ArrayList e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ylo753 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o753 add srow_index e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !ybtrue (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c z:org.apache.spark.api.python.PythonUtils toSeq ro753 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro754 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o745 join ro752 ro754 sinner e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro755 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o755 drop srow_index e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro756 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o756 writeTo stest_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5 e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yro757 (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Command to send: c o757 append e (clientserver.py:501, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:11 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/data/00000-31-2a4d0936-ad58-4e2a-b8cf-375a05e858d6-00001.parquet HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:11 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/data/00000-33-c48960ec-f768-4041-9afb-2d282c1320cc-00001.parquet HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:11 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/data/00000-35-f81e20b5-1086-43b9-b187-2d7b4ebb3ae2-00001.parquet HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:11 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT 
/root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/v2.metadata.json HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:11 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/0ff21ff2-56e3-4788-a1a1-b7ec3182e4a7-m0.avro HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:11 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/snap-8138613628709705005-1-0d578a4b-b355-4c7b-93b4-6be814b84808.avro HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:11 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/snap-8034851105952010148-1-42300b89-d077-4d68-8fee-a8b4175466bb.avro HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:11 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/version-hint.text HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:11 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/42300b89-d077-4d68-8fee-a8b4175466bb-m0.avro HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:11 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/0d578a4b-b355-4c7b-93b4-6be814b84808-m0.avro HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:11 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/snap-367251160246727166-1-0ff21ff2-56e3-4788-a1a1-b7ec3182e4a7.avro HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:11 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/v1.metadata.json HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:11 [ 670 ] DEBUG : http://172.16.2.5:9001 "PUT /root//iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/v3.metadata.json HTTP/1.1" 200 0 (connectionpool.py:547, _make_request) 2025-04-04 18:15:11 [ 670 ] INFO : Adding another dataframe. 
result files: ['/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/data/00000-31-2a4d0936-ad58-4e2a-b8cf-375a05e858d6-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/data/00000-33-c48960ec-f768-4041-9afb-2d282c1320cc-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/data/00000-35-f81e20b5-1086-43b9-b187-2d7b4ebb3ae2-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/v2.metadata.json', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/0ff21ff2-56e3-4788-a1a1-b7ec3182e4a7-m0.avro', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/snap-8138613628709705005-1-0d578a4b-b355-4c7b-93b4-6be814b84808.avro', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/snap-8034851105952010148-1-42300b89-d077-4d68-8fee-a8b4175466bb.avro', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/version-hint.text', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/42300b89-d077-4d68-8fee-a8b4175466bb-m0.avro', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/0d578a4b-b355-4c7b-93b4-6be814b84808-m0.avro', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/snap-367251160246727166-1-0ff21ff2-56e3-4788-a1a1-b7ec3182e4a7.avro', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/v1.metadata.json', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/v3.metadata.json'] (test.py:645, add_df) 2025-04-04 18:15:11 [ 670 ] INFO : Setup complete. 
files: ['/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/data/00000-31-2a4d0936-ad58-4e2a-b8cf-375a05e858d6-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/data/00000-33-c48960ec-f768-4041-9afb-2d282c1320cc-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/data/00000-35-f81e20b5-1086-43b9-b187-2d7b4ebb3ae2-00001.parquet', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/v2.metadata.json', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/0ff21ff2-56e3-4788-a1a1-b7ec3182e4a7-m0.avro', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/snap-8138613628709705005-1-0d578a4b-b355-4c7b-93b4-6be814b84808.avro', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/snap-8034851105952010148-1-42300b89-d077-4d68-8fee-a8b4175466bb.avro', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/version-hint.text', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/42300b89-d077-4d68-8fee-a8b4175466bb-m0.avro', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/0d578a4b-b355-4c7b-93b4-6be814b84808-m0.avro', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/snap-367251160246727166-1-0ff21ff2-56e3-4788-a1a1-b7ec3182e4a7.avro', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/v1.metadata.json', '/iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/metadata/v3.metadata.json'] (test.py:653, test_cluster_table_function) 2025-04-04 18:15:11 [ 670 ] DEBUG : Executing query SELECT * FROM system.clusters on node1 (cluster.py:3677, query) 2025-04-04 18:15:11 [ 670 ] INFO : Clusters setup: cluster_simple 1 1 0 1 node1 172.16.2.10 9000 1 default 0 0 0 \N \N \N cluster_simple 1 1 0 2 node2 172.16.2.8 9000 0 default 0 0 0 \N \N \N cluster_simple 1 1 0 3 node3 172.16.2.9 9000 0 default 0 0 0 \N \N \N (test.py:657, test_cluster_table_function) 2025-04-04 18:15:11 [ 670 ] DEBUG : Executing query SELECT * FROM icebergS3(s3, filename = 'iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/', format=Parquet, url = 'http://minio1:9001/root/') on node1 (cluster.py:3677, query) 2025-04-04 18:15:11 [ 670 ] DEBUG : Executing query SELECT * FROM icebergS3Cluster('cluster_simple', s3, filename = 'iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/', format=Parquet, url = 'http://minio1:9001/root/') on node1 (cluster.py:3677, query) 2025-04-04 18:15:12 [ 670 ] DEBUG : Executing query SELECT * FROM icebergS3(s3, filename = 'iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/', format=Parquet, url = 'http://minio1:9001/root/') SETTINGS object_storage_cluster='cluster_simple' on node1 (cluster.py:3677, query) 2025-04-04 18:15:12 [ 670 ] DEBUG : Command to send: m d o684 e (clientserver.py:501, send_command) 2025-04-04 18:15:12 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, send_command) 2025-04-04 18:15:12 [ 670 ] DEBUG : Command to send: m d o689 e (clientserver.py:501, send_command) 2025-04-04 18:15:12 [ 670 ] DEBUG : Answer received: !yv (clientserver.py:512, 
send_command) [... dozens of further near-identical DEBUG pairs elided: "Command to send: m d o<id> e (clientserver.py:501, send_command)" / "Answer received: !yv (clientserver.py:512, send_command)" ...] 2025-04-04 18:15:12 [ 670 ] DEBUG : Executing query DROP TABLE IF EXISTS test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5; CREATE TABLE test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5 ENGINE=IcebergS3(s3, filename = 'iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/', format=Parquet, url = 'http://minio1:9001/root/') SETTINGS object_storage_cluster = 'cluster_simple' on node1 (cluster.py:3677, query) 2025-04-04 18:15:12 [ 670 ] DEBUG : Executing query SELECT * FROM test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5 on node1 (cluster.py:3677, query) 2025-04-04 18:15:12 [ 670 ] DEBUG : Executing query SELECT * FROM remote('node2',icebergS3Cluster('cluster_simple', s3, filename = 'iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/', format=Parquet, url = 'http://minio1:9001/root/')) on node1 (cluster.py:3677, query) 2025-04-04 18:15:12 [ 670 ] DEBUG : Executing query DROP TABLE IF EXISTS `test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5` SYNC on node1 (cluster.py:3677, query)
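Editor's note: the elided DEBUG pairs are py4j client-server traffic, most likely from the PySpark session these tests use to write the Iceberg data (the frames are logged by py4j's clientserver.py). Each "m d o<N> e" frame appears to be a memory command ('m') with a delete subcommand ('d') releasing the remote Java object with id o<N>, and "!yv" appears to be the acknowledgement: return prefix '!', success 'y', void result 'v'. Python emits one such frame per garbage-collected Java proxy, hence the burst at test teardown; it is routine noise, not an error. A minimal sketch of the frame format under those assumptions (helper names are illustrative, not py4j's API):

    # Illustrative sketch of the py4j-style memory-release frames elided above.
    # Assumption: tokens are newline-separated on the wire; this flattened log
    # shows them space-separated.
    MEMORY_CMD = "m"      # memory-management command
    DELETE_SUB = "d"      # release one remote object
    END_MARKER = "e"      # end-of-command marker
    SUCCESS_VOID = "!yv"  # '!' return prefix + 'y' success + 'v' void payload

    def build_release_frame(object_id: int) -> str:
        # Produces the frame logged as "Command to send: m d o<id> e".
        return "\n".join([MEMORY_CMD, DELETE_SUB, f"o{object_id}", END_MARKER]) + "\n"

    def is_success_void(answer: str) -> bool:
        # Matches the reply logged as "Answer received: !yv".
        return answer.strip() == SUCCESS_VOID

    assert build_release_frame(684) == "m\nd\no684\ne\n"
    assert is_success_void("!yv")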
2025-04-04 18:15:12 [ 670 ] DEBUG : Executing query DROP TABLE IF EXISTS test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5; CREATE TABLE test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5 ENGINE=IcebergS3(s3, filename = 'iceberg_data/default/test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5/', format=Parquet, url = 'http://minio1:9001/root/') on node1 (cluster.py:3677, query) 2025-04-04 18:15:12 [ 670 ] DEBUG : Executing query SELECT * FROM test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5 on node1 (cluster.py:3677, query) 2025-04-04 18:15:12 [ 670 ] DEBUG : Executing query SELECT * FROM test_iceberg_cluster_2_s3_967216f3_b807_466d_8bf0_3669989a05e5 SETTINGS object_storage_cluster='cluster_simple' on node1 (cluster.py:3677, query) ============================== slowest durations =============================== 226.54s setup test_storage_iceberg/test.py::test_cluster_table_function[azure-1] 169.74s setup test_storage_azure_blob_storage/test_cluster.py::test_cluster_with_named_collection 169.39s setup test_storage_azure_blob_storage/test_check_after_upload.py::test_simple 94.36s call test_storage_hudi/test.py::test_multiple_hudi_files 42.40s call test_storage_iceberg/test.py::test_not_evolved_schema[local-1] 42.37s call test_storage_iceberg/test.py::test_not_evolved_schema[local-2] 37.57s call test_storage_iceberg/test.py::test_metadata_file_selection[local-2] 37.22s call test_storage_iceberg/test.py::test_evolved_schema_simple[True-local-2] 36.27s call test_storage_iceberg/test.py::test_evolved_schema_simple[False-local-2] 35.81s call test_storage_iceberg/test.py::test_metadata_file_selection[local-1] 35.77s call test_storage_iceberg/test.py::test_evolved_schema_simple[False-local-1] 35.44s call test_storage_iceberg/test.py::test_evolved_schema_simple[True-local-1] 30.82s call test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[local-1] 29.97s call test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[local-2] 26.05s call test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[hdfs-1] 23.06s call test_storage_iceberg/test.py::test_evolved_schema_simple[False-hdfs-2] 21.94s teardown test_storage_iceberg/test.py::test_row_based_deletes[hdfs] 21.82s call test_storage_iceberg/test.py::test_evolved_schema_simple[True-hdfs-2] 21.77s teardown test_storage_azure_blob_storage/test_cluster.py::test_unset_skip_unavailable_shards 21.67s call test_storage_iceberg/test.py::test_evolved_schema_simple[True-hdfs-1] 21.46s teardown test_s3_zero_copy_replication/test.py::test_s3_zero_copy_with_ttl_move[tiered_copy-True-3] 21.38s teardown test_storage_hudi/test.py::test_types 20.56s call test_storage_iceberg/test.py::test_evolved_schema_simple[False-hdfs-1] 19.99s setup test_s3_zero_copy_replication/test.py::test_s3_zero_copy_with_ttl_move[tiered_copy-True-3] 19.66s teardown test_storage_azure_blob_storage/test_check_after_upload.py::test_simple 17.01s call test_storage_iceberg/test.py::test_not_evolved_schema[hdfs-1] 16.45s call test_storage_iceberg/test.py::test_metadata_file_selection[hdfs-2] 15.82s call test_storage_iceberg/test.py::test_metadata_file_selection[hdfs-1] 15.22s call test_storage_iceberg/test.py::test_not_evolved_schema[hdfs-2] 14.47s setup test_storage_hudi/test.py::test_multiple_hudi_files 14.08s call test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[hdfs-2] 11.95s setup test_server_keep_alive/test.py::test_max_keep_alive_requests_on_user_side 11.70s setup test_ssh_keys_authentication/test.py::test_ecdsa 9.59s call test_s3_zero_copy_replication/test.py::test_s3_zero_copy_with_ttl_move[tiered_copy-True-3] 8.80s call
test_storage_iceberg/test.py::test_restart_broken_s3 8.12s call test_storage_iceberg/test.py::test_cluster_table_function[azure-1] 6.99s call test_storage_iceberg/test.py::test_delete_files[local-1] 6.94s call test_storage_iceberg/test.py::test_delete_files[local-2] 6.15s call test_storage_iceberg/test.py::test_not_evolved_schema[azure-2] 5.98s call test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[azure-1] 5.74s call test_storage_iceberg/test.py::test_not_evolved_schema[azure-1] 5.71s teardown test_server_keep_alive/test.py::test_max_keep_alive_requests_on_user_side 5.65s call test_storage_iceberg/test.py::test_evolved_schema_simple[False-azure-1] 5.51s call test_storage_iceberg/test.py::test_metadata_file_selection[azure-1] 5.46s call test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[azure-2] 5.44s call test_storage_iceberg/test.py::test_evolved_schema_simple[False-azure-2] 5.41s call test_storage_iceberg/test.py::test_evolved_schema_simple[True-azure-2] 5.34s call test_storage_iceberg/test.py::test_metadata_file_selection[azure-2] 5.09s call test_storage_iceberg/test.py::test_evolved_schema_simple[True-azure-1] 4.92s call test_storage_iceberg/test.py::test_not_evolved_schema[s3-1] 4.78s call test_storage_iceberg/test.py::test_metadata_file_selection[s3-1] 4.70s call test_storage_iceberg/test.py::test_metadata_file_selection[s3-2] 4.62s call test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[s3-1] 4.60s call test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[s3-2] 4.60s teardown test_ssh_keys_authentication/test.py::test_wrong_key 4.42s call test_storage_iceberg/test.py::test_evolved_schema_complex[local-1] 4.32s call test_storage_iceberg/test.py::test_evolved_schema_complex[local-2] 4.26s call test_storage_iceberg/test.py::test_evolved_schema_simple[True-s3-2] 4.01s call test_storage_iceberg/test.py::test_evolved_schema_simple[True-s3-1] 3.84s call test_storage_iceberg/test.py::test_evolved_schema_simple[False-s3-2] 3.75s call test_storage_iceberg/test.py::test_evolved_schema_simple[False-s3-1] 3.65s call test_storage_iceberg/test.py::test_not_evolved_schema[s3-2] 3.58s call test_storage_iceberg/test.py::test_delete_files[azure-1] 3.47s call test_storage_iceberg/test.py::test_delete_files[hdfs-1] 3.45s call test_storage_iceberg/test.py::test_multiple_iceberg_files[local-1] 3.34s call test_storage_hudi/test.py::test_types 3.22s call test_storage_iceberg/test.py::test_multiple_iceberg_files[local-2] 3.14s call test_storage_iceberg/test.py::test_partition_by[local-2] 2.95s call test_storage_iceberg/test.py::test_cluster_table_function[hdfs-1] 2.93s call test_storage_iceberg/test.py::test_delete_files[hdfs-2] 2.92s call test_storage_hudi/test.py::test_single_hudi_file 2.75s call test_storage_iceberg/test.py::test_cluster_table_function[hdfs-2] 2.69s call test_storage_iceberg/test.py::test_partition_by[local-1] 2.54s call test_storage_iceberg/test.py::test_multiple_iceberg_files[hdfs-1] 2.45s call test_storage_iceberg/test.py::test_delete_files[azure-2] 2.44s call test_storage_iceberg/test.py::test_partition_by[hdfs-2] 2.37s call test_storage_iceberg/test.py::test_cluster_table_function[azure-2] 2.08s call test_storage_iceberg/test.py::test_delete_files[s3-2] 1.94s call test_storage_iceberg/test.py::test_delete_files[s3-1] 1.88s call test_storage_iceberg/test.py::test_row_based_deletes[hdfs] 1.84s call test_storage_iceberg/test.py::test_cluster_table_function[s3-2] 1.83s call 
test_storage_iceberg/test.py::test_cluster_table_function[s3-1] 1.76s call test_storage_iceberg/test.py::test_multiple_iceberg_files[hdfs-2] 1.75s call test_storage_iceberg/test.py::test_row_based_deletes[azure] 1.72s call test_storage_iceberg/test.py::test_partition_by[hdfs-1] 1.30s call test_storage_azure_blob_storage/test_cluster.py::test_format_detection 1.14s call test_storage_iceberg/test.py::test_filesystem_cache[s3] 1.10s call test_storage_iceberg/test.py::test_evolved_schema_complex[azure-1] 1.05s call test_storage_iceberg/test.py::test_multiple_iceberg_files[s3-2] 1.01s call test_storage_iceberg/test.py::test_multiple_iceberg_files[azure-1] 1.00s call test_storage_iceberg/test.py::test_multiple_iceberg_files[azure-2] 0.84s call test_storage_iceberg/test.py::test_evolved_schema_complex[azure-2] 0.77s call test_storage_iceberg/test.py::test_multiple_iceberg_files[s3-1] 0.73s call test_storage_iceberg/test.py::test_partition_by[azure-2] 0.70s call test_storage_iceberg/test.py::test_partition_by[azure-1] 0.57s call test_storage_azure_blob_storage/test_cluster.py::test_select_all 0.57s call test_storage_iceberg/test.py::test_evolved_schema_complex[s3-1] 0.56s call test_storage_iceberg/test.py::test_partition_by[s3-2] 0.54s call test_storage_iceberg/test.py::test_evolved_schema_complex[s3-2] 0.48s call test_storage_iceberg/test.py::test_partition_by[s3-1] 0.48s call test_storage_azure_blob_storage/test_check_after_upload.py::test_simple 0.34s call test_server_keep_alive/test.py::test_max_keep_alive_requests_on_user_side 0.30s call test_storage_azure_blob_storage/test_cluster.py::test_union_all 0.27s call test_storage_azure_blob_storage/test_cluster.py::test_count 0.27s call test_ssh_keys_authentication/test.py::test_key_with_passphrase 0.27s call test_ssh_keys_authentication/test.py::test_key_with_wrong_passphrase 0.24s call test_storage_azure_blob_storage/test_cluster.py::test_cluster_with_named_collection 0.22s call test_ssh_keys_authentication/test.py::test_ecdsa 0.20s call test_storage_azure_blob_storage/test_cluster.py::test_partition_parallel_reading_with_cluster 0.18s call test_storage_azure_blob_storage/test_cluster.py::test_unset_skip_unavailable_shards 0.13s call test_storage_azure_blob_storage/test_cluster.py::test_skip_unavailable_shards 0.07s call test_ssh_keys_authentication/test.py::test_wrong_key 0.07s call test_ssh_keys_authentication/test.py::test_rsa 0.07s call test_ssh_keys_authentication/test.py::test_ed25519 0.00s teardown test_storage_iceberg/test.py::test_evolved_schema_simple[True-hdfs-2] 0.00s teardown test_storage_iceberg/test.py::test_not_evolved_schema[hdfs-1] 0.00s teardown test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[hdfs-2] 0.00s teardown test_storage_iceberg/test.py::test_not_evolved_schema[hdfs-2] 0.00s teardown test_storage_iceberg/test.py::test_evolved_schema_simple[False-hdfs-2] 0.00s teardown test_storage_iceberg/test.py::test_evolved_schema_simple[True-hdfs-1] 0.00s teardown test_storage_iceberg/test.py::test_evolved_schema_simple[False-hdfs-1] 0.00s teardown test_storage_iceberg/test.py::test_cluster_table_function[azure-1] 0.00s teardown test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[hdfs-1] 0.00s teardown test_storage_iceberg/test.py::test_metadata_file_selection[hdfs-2] 0.00s teardown test_storage_iceberg/test.py::test_metadata_file_selection[hdfs-1] 0.00s teardown test_storage_iceberg/test.py::test_evolved_schema_simple[False-local-2] 0.00s teardown 
test_storage_iceberg/test.py::test_not_evolved_schema[local-1] 0.00s teardown test_storage_iceberg/test.py::test_not_evolved_schema[azure-2] 0.00s teardown test_storage_hudi/test.py::test_multiple_hudi_files 0.00s teardown test_storage_iceberg/test.py::test_metadata_file_selection[local-2] 0.00s teardown test_storage_iceberg/test.py::test_evolved_schema_simple[True-local-1] 0.00s teardown test_storage_iceberg/test.py::test_not_evolved_schema[azure-1] 0.00s teardown test_storage_iceberg/test.py::test_metadata_file_selection[s3-2] 0.00s teardown test_storage_iceberg/test.py::test_delete_files[hdfs-2] 0.00s teardown test_storage_iceberg/test.py::test_not_evolved_schema[local-2] 0.00s teardown test_storage_iceberg/test.py::test_multiple_iceberg_files[hdfs-1] 0.00s teardown test_storage_iceberg/test.py::test_evolved_schema_simple[True-local-2] 0.00s teardown test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[local-1] 0.00s teardown test_storage_iceberg/test.py::test_metadata_file_selection[local-1] 0.00s teardown test_storage_iceberg/test.py::test_evolved_schema_simple[False-local-1] 0.00s setup test_storage_iceberg/test.py::test_not_evolved_schema[hdfs-2] 0.00s teardown test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[local-2] 0.00s teardown test_storage_iceberg/test.py::test_delete_files[local-2] 0.00s teardown test_storage_iceberg/test.py::test_cluster_table_function[hdfs-1] 0.00s setup test_storage_iceberg/test.py::test_not_evolved_schema[hdfs-1] 0.00s teardown test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[azure-1] 0.00s setup test_storage_iceberg/test.py::test_multiple_iceberg_files[azure-1] 0.00s setup test_storage_iceberg/test.py::test_row_based_deletes[hdfs] 0.00s teardown test_storage_iceberg/test.py::test_evolved_schema_simple[False-azure-2] 0.00s teardown test_storage_iceberg/test.py::test_metadata_file_selection[azure-1] 0.00s teardown test_storage_iceberg/test.py::test_partition_by[local-2] 0.00s teardown test_storage_iceberg/test.py::test_not_evolved_schema[s3-2] 0.00s teardown test_storage_iceberg/test.py::test_metadata_file_selection[azure-2] 0.00s teardown test_storage_iceberg/test.py::test_evolved_schema_simple[True-azure-1] 0.00s teardown test_storage_iceberg/test.py::test_evolved_schema_simple[False-azure-1] 0.00s teardown test_storage_iceberg/test.py::test_partition_by[azure-1] 0.00s setup test_storage_iceberg/test.py::test_partition_by[azure-2] 0.00s teardown test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[azure-2] 0.00s teardown test_storage_iceberg/test.py::test_delete_files[hdfs-1] 0.00s setup test_storage_iceberg/test.py::test_partition_by[s3-1] 0.00s teardown test_storage_iceberg/test.py::test_cluster_table_function[hdfs-2] 0.00s teardown test_storage_iceberg/test.py::test_evolved_schema_simple[True-azure-2] 0.00s teardown test_storage_iceberg/test.py::test_multiple_iceberg_files[azure-2] 0.00s setup test_storage_iceberg/test.py::test_evolved_schema_simple[True-local-1] 0.00s setup test_storage_iceberg/test.py::test_multiple_iceberg_files[hdfs-2] 0.00s teardown test_storage_iceberg/test.py::test_multiple_iceberg_files[local-2] 0.00s teardown test_storage_iceberg/test.py::test_delete_files[s3-1] 0.00s setup test_storage_iceberg/test.py::test_multiple_iceberg_files[local-2] 0.00s setup test_storage_iceberg/test.py::test_not_evolved_schema[s3-1] 0.00s setup test_storage_iceberg/test.py::test_partition_by[azure-1] 0.00s setup test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[local-1] 
0.00s setup test_storage_iceberg/test.py::test_delete_files[s3-1] 0.00s teardown test_storage_iceberg/test.py::test_partition_by[hdfs-2] 0.00s setup test_storage_iceberg/test.py::test_not_evolved_schema[local-2] 0.00s setup test_storage_iceberg/test.py::test_multiple_iceberg_files[s3-1] 0.00s setup test_storage_iceberg/test.py::test_not_evolved_schema[local-1] 0.00s teardown test_storage_iceberg/test.py::test_cluster_table_function[s3-2] 0.00s teardown test_storage_iceberg/test.py::test_delete_files[local-1] 0.00s teardown test_storage_iceberg/test.py::test_multiple_iceberg_files[s3-2] 0.00s teardown test_storage_iceberg/test.py::test_delete_files[s3-2] 0.00s teardown test_storage_iceberg/test.py::test_multiple_iceberg_files[hdfs-2] 0.00s setup test_storage_iceberg/test.py::test_partition_by[hdfs-1] 0.00s setup test_storage_iceberg/test.py::test_evolved_schema_simple[False-local-2] 0.00s setup test_storage_iceberg/test.py::test_partition_by[local-1] 0.00s setup test_storage_iceberg/test.py::test_multiple_iceberg_files[hdfs-1] 0.00s teardown test_storage_iceberg/test.py::test_partition_by[hdfs-1] 0.00s setup test_storage_iceberg/test.py::test_not_evolved_schema[azure-2] 0.00s teardown test_storage_iceberg/test.py::test_restart_broken_s3 0.00s setup test_storage_iceberg/test.py::test_evolved_schema_simple[False-s3-1] 0.00s teardown test_storage_iceberg/test.py::test_multiple_iceberg_files[azure-1] 0.00s teardown test_storage_iceberg/test.py::test_cluster_table_function[azure-2] 0.00s setup test_storage_iceberg/test.py::test_evolved_schema_simple[True-local-2] 0.00s setup test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[hdfs-2] 0.00s setup test_storage_iceberg/test.py::test_evolved_schema_complex[local-2] 0.00s teardown test_storage_iceberg/test.py::test_delete_files[azure-2] 0.00s setup test_storage_iceberg/test.py::test_metadata_file_selection[s3-1] 0.00s setup test_storage_iceberg/test.py::test_row_based_deletes[azure] 0.00s teardown test_storage_iceberg/test.py::test_partition_by[azure-2] 0.00s setup test_storage_iceberg/test.py::test_not_evolved_schema[azure-1] 0.00s teardown test_storage_iceberg/test.py::test_cluster_table_function[s3-1] 0.00s teardown test_storage_iceberg/test.py::test_delete_files[azure-1] 0.00s setup test_storage_iceberg/test.py::test_evolved_schema_simple[False-local-1] 0.00s setup test_storage_iceberg/test.py::test_evolved_schema_simple[True-s3-1] 0.00s setup test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[local-2] 0.00s teardown test_storage_iceberg/test.py::test_partition_by[s3-2] 0.00s teardown test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[s3-2] 0.00s teardown test_storage_iceberg/test.py::test_metadata_file_selection[s3-1] 0.00s setup test_storage_iceberg/test.py::test_delete_files[s3-2] 0.00s teardown test_storage_iceberg/test.py::test_multiple_iceberg_files[local-1] 0.00s setup test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[azure-1] 0.00s setup test_storage_hudi/test.py::test_single_hudi_file 0.00s setup test_storage_iceberg/test.py::test_evolved_schema_complex[azure-1] 0.00s teardown test_storage_iceberg/test.py::test_evolved_schema_simple[False-s3-2] 0.00s teardown test_storage_iceberg/test.py::test_filesystem_cache[s3] 0.00s setup test_storage_iceberg/test.py::test_evolved_schema_simple[True-hdfs-2] 0.00s setup test_storage_iceberg/test.py::test_metadata_file_selection[hdfs-2] 0.00s setup test_storage_iceberg/test.py::test_metadata_file_selection[s3-2] 0.00s setup 
test_storage_iceberg/test.py::test_evolved_schema_simple[False-hdfs-2] 0.00s setup test_storage_iceberg/test.py::test_delete_files[local-1] 0.00s teardown test_storage_iceberg/test.py::test_not_evolved_schema[s3-1] 0.00s teardown test_storage_iceberg/test.py::test_evolved_schema_simple[True-s3-2] 0.00s setup test_storage_iceberg/test.py::test_evolved_schema_simple[False-hdfs-1] 0.00s teardown test_storage_iceberg/test.py::test_evolved_schema_simple[False-s3-1] 0.00s setup test_storage_iceberg/test.py::test_restart_broken_s3 0.00s teardown test_storage_iceberg/test.py::test_multiple_iceberg_files[s3-1] 0.00s setup test_storage_iceberg/test.py::test_evolved_schema_simple[True-azure-1] 0.00s teardown test_storage_iceberg/test.py::test_evolved_schema_simple[True-s3-1] 0.00s setup test_storage_iceberg/test.py::test_evolved_schema_simple[False-s3-2] 0.00s teardown test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[s3-1] 0.00s setup test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[s3-1] 0.00s setup test_storage_iceberg/test.py::test_metadata_file_selection[local-1] 0.00s setup test_storage_iceberg/test.py::test_metadata_file_selection[local-2] 0.00s setup test_storage_iceberg/test.py::test_evolved_schema_simple[True-hdfs-1] 0.00s setup test_storage_iceberg/test.py::test_cluster_table_function[hdfs-1] 0.00s setup test_storage_iceberg/test.py::test_delete_files[hdfs-2] 0.00s setup test_storage_hudi/test.py::test_types 0.00s setup test_storage_iceberg/test.py::test_evolved_schema_simple[True-s3-2] 0.00s setup test_storage_iceberg/test.py::test_metadata_file_selection[azure-1] 0.00s setup test_storage_iceberg/test.py::test_evolved_schema_complex[s3-1] 0.00s setup test_storage_iceberg/test.py::test_metadata_file_selection[hdfs-1] 0.00s setup test_storage_iceberg/test.py::test_evolved_schema_simple[False-azure-2] 0.00s setup test_storage_iceberg/test.py::test_delete_files[azure-1] 0.00s setup test_storage_iceberg/test.py::test_delete_files[azure-2] 0.00s setup test_ssh_keys_authentication/test.py::test_wrong_key 0.00s teardown test_storage_iceberg/test.py::test_evolved_schema_complex[local-1] 0.00s setup test_storage_iceberg/test.py::test_multiple_iceberg_files[s3-2] 0.00s setup test_storage_iceberg/test.py::test_multiple_iceberg_files[azure-2] 0.00s teardown test_storage_iceberg/test.py::test_row_based_deletes[azure] 0.00s teardown test_storage_iceberg/test.py::test_partition_by[local-1] 0.00s teardown test_storage_iceberg/test.py::test_partition_by[s3-1] 0.00s setup test_storage_iceberg/test.py::test_partition_by[s3-2] 0.00s setup test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[s3-2] 0.00s setup test_ssh_keys_authentication/test.py::test_rsa 0.00s teardown test_storage_iceberg/test.py::test_evolved_schema_complex[local-2] 0.00s setup test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[azure-2] 0.00s setup test_storage_iceberg/test.py::test_delete_files[hdfs-1] 0.00s setup test_ssh_keys_authentication/test.py::test_key_with_passphrase 0.00s setup test_storage_iceberg/test.py::test_metadata_file_selection[azure-2] 0.00s setup test_storage_iceberg/test.py::test_partition_by[hdfs-2] 0.00s setup test_storage_iceberg/test.py::test_evolved_schema_complex[s3-2] 0.00s setup test_storage_iceberg/test.py::test_cluster_table_function[s3-1] 0.00s setup test_storage_iceberg/test.py::test_evolved_schema_simple[False-azure-1] 0.00s setup test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[hdfs-1] 0.00s setup 
test_storage_iceberg/test.py::test_not_evolved_schema[s3-2] 0.00s setup test_storage_iceberg/test.py::test_cluster_table_function[hdfs-2] 0.00s setup test_storage_iceberg/test.py::test_evolved_schema_simple[True-azure-2] 0.00s setup test_storage_iceberg/test.py::test_delete_files[local-2] 0.00s setup test_storage_iceberg/test.py::test_evolved_schema_complex[azure-2] 0.00s setup test_storage_iceberg/test.py::test_cluster_table_function[azure-2] 0.00s setup test_storage_iceberg/test.py::test_filesystem_cache[s3] 0.00s setup test_storage_iceberg/test.py::test_evolved_schema_complex[local-1] 0.00s teardown test_storage_azure_blob_storage/test_cluster.py::test_cluster_with_named_collection 0.00s setup test_storage_iceberg/test.py::test_multiple_iceberg_files[local-1] 0.00s setup test_storage_azure_blob_storage/test_cluster.py::test_format_detection 0.00s teardown test_storage_hudi/test.py::test_single_hudi_file 0.00s setup test_ssh_keys_authentication/test.py::test_ed25519 0.00s teardown test_storage_iceberg/test.py::test_evolved_schema_complex[azure-1] 0.00s setup test_storage_iceberg/test.py::test_cluster_table_function[s3-2] 0.00s setup test_storage_azure_blob_storage/test_cluster.py::test_unset_skip_unavailable_shards 0.00s setup test_storage_iceberg/test.py::test_partition_by[local-2] 0.00s setup test_storage_azure_blob_storage/test_cluster.py::test_partition_parallel_reading_with_cluster 0.00s teardown test_ssh_keys_authentication/test.py::test_rsa 0.00s teardown test_storage_iceberg/test.py::test_evolved_schema_complex[azure-2] 0.00s teardown test_storage_iceberg/test.py::test_evolved_schema_complex[s3-1] 0.00s setup test_storage_azure_blob_storage/test_cluster.py::test_count 0.00s teardown test_ssh_keys_authentication/test.py::test_key_with_wrong_passphrase 0.00s teardown test_ssh_keys_authentication/test.py::test_ed25519 0.00s teardown test_ssh_keys_authentication/test.py::test_ecdsa 0.00s setup test_storage_azure_blob_storage/test_cluster.py::test_select_all 0.00s setup test_storage_azure_blob_storage/test_cluster.py::test_union_all 0.00s teardown test_storage_azure_blob_storage/test_cluster.py::test_select_all 0.00s teardown test_storage_iceberg/test.py::test_evolved_schema_complex[s3-2] 0.00s teardown test_storage_azure_blob_storage/test_cluster.py::test_count 0.00s setup test_ssh_keys_authentication/test.py::test_key_with_wrong_passphrase 0.00s teardown test_storage_azure_blob_storage/test_cluster.py::test_format_detection 0.00s teardown test_storage_azure_blob_storage/test_cluster.py::test_union_all 0.00s teardown test_storage_azure_blob_storage/test_cluster.py::test_skip_unavailable_shards 0.00s setup test_storage_azure_blob_storage/test_cluster.py::test_skip_unavailable_shards 0.00s teardown test_ssh_keys_authentication/test.py::test_key_with_passphrase 0.00s teardown test_storage_azure_blob_storage/test_cluster.py::test_partition_parallel_reading_with_cluster =========================== short test summary info ============================ FAILED test_storage_azure_blob_storage/test_cluster.py::test_select_all - AssertionError: assert 1 a\n2 b == 1 a\n2 b\n1 a\n2 b\n1 a\n2 b FAILED test_storage_iceberg/test.py::test_cluster_table_function[azure-1] - A... FAILED test_storage_iceberg/test.py::test_cluster_table_function[azure-2] - A... FAILED test_storage_iceberg/test.py::test_cluster_table_function[hdfs-1] - As... FAILED test_storage_iceberg/test.py::test_cluster_table_function[hdfs-2] - As... FAILED test_storage_iceberg/test.py::test_cluster_table_function[s3-1] - Asse... 
FAILED test_storage_iceberg/test.py::test_cluster_table_function[s3-2] - Asse... PASSED test_ssh_keys_authentication/test.py::test_ecdsa PASSED test_ssh_keys_authentication/test.py::test_ed25519 PASSED test_ssh_keys_authentication/test.py::test_key_with_passphrase PASSED test_server_keep_alive/test.py::test_max_keep_alive_requests_on_user_side PASSED test_ssh_keys_authentication/test.py::test_key_with_wrong_passphrase PASSED test_ssh_keys_authentication/test.py::test_rsa PASSED test_ssh_keys_authentication/test.py::test_wrong_key PASSED test_s3_zero_copy_replication/test.py::test_s3_zero_copy_with_ttl_move[tiered_copy-True-3] PASSED test_storage_hudi/test.py::test_multiple_hudi_files PASSED test_storage_hudi/test.py::test_single_hudi_file PASSED test_storage_hudi/test.py::test_types PASSED test_storage_azure_blob_storage/test_check_after_upload.py::test_simple PASSED test_storage_azure_blob_storage/test_cluster.py::test_cluster_with_named_collection PASSED test_storage_azure_blob_storage/test_cluster.py::test_count PASSED test_storage_azure_blob_storage/test_cluster.py::test_format_detection PASSED test_storage_azure_blob_storage/test_cluster.py::test_partition_parallel_reading_with_cluster PASSED test_storage_azure_blob_storage/test_cluster.py::test_skip_unavailable_shards PASSED test_storage_azure_blob_storage/test_cluster.py::test_union_all PASSED test_storage_azure_blob_storage/test_cluster.py::test_unset_skip_unavailable_shards PASSED test_storage_iceberg/test.py::test_delete_files[azure-1] PASSED test_storage_iceberg/test.py::test_delete_files[azure-2] PASSED test_storage_iceberg/test.py::test_delete_files[hdfs-1] PASSED test_storage_iceberg/test.py::test_delete_files[hdfs-2] PASSED test_storage_iceberg/test.py::test_delete_files[local-1] PASSED test_storage_iceberg/test.py::test_delete_files[local-2] PASSED test_storage_iceberg/test.py::test_delete_files[s3-1] PASSED test_storage_iceberg/test.py::test_delete_files[s3-2] PASSED test_storage_iceberg/test.py::test_evolved_schema_complex[azure-1] PASSED test_storage_iceberg/test.py::test_evolved_schema_complex[azure-2] PASSED test_storage_iceberg/test.py::test_evolved_schema_complex[local-1] PASSED test_storage_iceberg/test.py::test_evolved_schema_complex[local-2] PASSED test_storage_iceberg/test.py::test_evolved_schema_complex[s3-1] PASSED test_storage_iceberg/test.py::test_evolved_schema_complex[s3-2] PASSED test_storage_iceberg/test.py::test_evolved_schema_simple[False-azure-1] PASSED test_storage_iceberg/test.py::test_evolved_schema_simple[False-azure-2] PASSED test_storage_iceberg/test.py::test_evolved_schema_simple[False-hdfs-1] PASSED test_storage_iceberg/test.py::test_evolved_schema_simple[False-hdfs-2] PASSED test_storage_iceberg/test.py::test_evolved_schema_simple[False-local-1] PASSED test_storage_iceberg/test.py::test_evolved_schema_simple[False-local-2] PASSED test_storage_iceberg/test.py::test_evolved_schema_simple[False-s3-1] PASSED test_storage_iceberg/test.py::test_evolved_schema_simple[False-s3-2] PASSED test_storage_iceberg/test.py::test_evolved_schema_simple[True-azure-1] PASSED test_storage_iceberg/test.py::test_evolved_schema_simple[True-azure-2] PASSED test_storage_iceberg/test.py::test_evolved_schema_simple[True-hdfs-1] PASSED test_storage_iceberg/test.py::test_evolved_schema_simple[True-hdfs-2] PASSED test_storage_iceberg/test.py::test_evolved_schema_simple[True-local-1] PASSED test_storage_iceberg/test.py::test_evolved_schema_simple[True-local-2] PASSED 
test_storage_iceberg/test.py::test_evolved_schema_simple[True-s3-1] PASSED test_storage_iceberg/test.py::test_evolved_schema_simple[True-s3-2] PASSED test_storage_iceberg/test.py::test_filesystem_cache[s3] PASSED test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[azure-1] PASSED test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[azure-2] PASSED test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[hdfs-1] PASSED test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[hdfs-2] PASSED test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[local-1] PASSED test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[local-2] PASSED test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[s3-1] PASSED test_storage_iceberg/test.py::test_metadata_file_format_with_uuid[s3-2] PASSED test_storage_iceberg/test.py::test_metadata_file_selection[azure-1] PASSED test_storage_iceberg/test.py::test_metadata_file_selection[azure-2] PASSED test_storage_iceberg/test.py::test_metadata_file_selection[hdfs-1] PASSED test_storage_iceberg/test.py::test_metadata_file_selection[hdfs-2] PASSED test_storage_iceberg/test.py::test_metadata_file_selection[local-1] PASSED test_storage_iceberg/test.py::test_metadata_file_selection[local-2] PASSED test_storage_iceberg/test.py::test_metadata_file_selection[s3-1] PASSED test_storage_iceberg/test.py::test_metadata_file_selection[s3-2] PASSED test_storage_iceberg/test.py::test_multiple_iceberg_files[azure-1] PASSED test_storage_iceberg/test.py::test_multiple_iceberg_files[azure-2] PASSED test_storage_iceberg/test.py::test_multiple_iceberg_files[hdfs-1] PASSED test_storage_iceberg/test.py::test_multiple_iceberg_files[hdfs-2] PASSED test_storage_iceberg/test.py::test_multiple_iceberg_files[local-1] PASSED test_storage_iceberg/test.py::test_multiple_iceberg_files[local-2] PASSED test_storage_iceberg/test.py::test_multiple_iceberg_files[s3-1] PASSED test_storage_iceberg/test.py::test_multiple_iceberg_files[s3-2] PASSED test_storage_iceberg/test.py::test_not_evolved_schema[azure-1] PASSED test_storage_iceberg/test.py::test_not_evolved_schema[azure-2] PASSED test_storage_iceberg/test.py::test_not_evolved_schema[hdfs-1] PASSED test_storage_iceberg/test.py::test_not_evolved_schema[hdfs-2] PASSED test_storage_iceberg/test.py::test_not_evolved_schema[local-1] PASSED test_storage_iceberg/test.py::test_not_evolved_schema[local-2] PASSED test_storage_iceberg/test.py::test_not_evolved_schema[s3-1] PASSED test_storage_iceberg/test.py::test_not_evolved_schema[s3-2] PASSED test_storage_iceberg/test.py::test_partition_by[azure-1] PASSED test_storage_iceberg/test.py::test_partition_by[azure-2] PASSED test_storage_iceberg/test.py::test_partition_by[hdfs-1] PASSED test_storage_iceberg/test.py::test_partition_by[hdfs-2] PASSED test_storage_iceberg/test.py::test_partition_by[local-1] PASSED test_storage_iceberg/test.py::test_partition_by[local-2] PASSED test_storage_iceberg/test.py::test_partition_by[s3-1] PASSED test_storage_iceberg/test.py::test_partition_by[s3-2] PASSED test_storage_iceberg/test.py::test_restart_broken_s3 PASSED test_storage_iceberg/test.py::test_row_based_deletes[azure] PASSED test_storage_iceberg/test.py::test_row_based_deletes[hdfs] ================== 7 failed, 93 passed in 1013.14s (0:16:53) =================== Traceback (most recent call last): File "/home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration/./runner", line 528, in subprocess.check_call(cmd, shell=True) File "/usr/lib/python3.10/subprocess.py", line 369, in check_call raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command 'docker run --rm --name clickhouse_integration_tests_70ne6i [... remaining docker run options and test selectors elided; the command is identical to the invocation logged at the start of this run ...]
-vvv" altinityinfra/integration-tests-runner:cd6390247eca ' returned non-zero exit status 1.
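Editor's note: all seven failures sit on the object-storage-over-cluster read path. The test_select_all assertion above compares one copy of the data (1 a\n2 b) against the same rows three times over, i.e. once per replica of cluster_simple (node1/node2/node3 in the clusters setup logged earlier), which suggests every node returned the full table instead of its share of the files. The test_cluster_table_function runs exercise the same path three ways, per the queries logged above: the icebergS3Cluster table function, a table created with SETTINGS object_storage_cluster, and a plain read with a query-level object_storage_cluster setting. A rough sketch of that comparison, assuming the integration framework's node.query() helper seen in the log (cluster.py:3677); check_cluster_reads, rows and table_path are hypothetical names, not the test's actual source:

    # Sketch: a plain single-node read is the reference; each clustered
    # variant must return the same rows once, not one copy per node.

    def rows(tsv: str) -> list[str]:
        # Normalise a TSV result for order-insensitive comparison.
        return sorted(line for line in tsv.splitlines() if line)

    def check_cluster_reads(node1, table_path: str) -> None:
        base = (
            f"icebergS3(s3, filename = '{table_path}', "
            "format=Parquet, url = 'http://minio1:9001/root/')"
        )
        expected = rows(node1.query(f"SELECT * FROM {base}"))

        clustered = [
            # 1) the dedicated *Cluster table function
            "SELECT * FROM icebergS3Cluster('cluster_simple', s3, "
            f"filename = '{table_path}', format=Parquet, url = 'http://minio1:9001/root/')",
            # 2) the same read driven by the query-level setting
            f"SELECT * FROM {base} SETTINGS object_storage_cluster='cluster_simple'",
        ]
        for query in clustered:
            got = rows(node1.query(query))
            # A failure like test_select_all's shows up here as `got` being
            # several concatenated copies of `expected`.
            assert got == expected, f"cluster read mismatch for: {query}"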
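Editor's note: the closing traceback is the runner's normal failure propagation, not a second bug. pytest exits with status 1 because 7 tests failed, docker run forwards the container's exit status, and the runner's subprocess.check_call(cmd, shell=True) (runner line 528 above) turns any non-zero status into a CalledProcessError. A minimal reproduction of that mechanism; the command here is a stand-in for the elided docker run invocation:

    import subprocess

    # check_call raises CalledProcessError for any non-zero exit status,
    # which is exactly how the failed pytest run surfaces as the traceback above.
    cmd = "exit 1"  # stand-in for the long "docker run ..." command
    try:
        subprocess.check_call(cmd, shell=True)
    except subprocess.CalledProcessError as err:
        print(f"Command {err.cmd!r} returned non-zero exit status {err.returncode}.")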